Explain it like I’m not an engineer.
Short, practical posts for people dealing with complex tech and complex systems—especially LLMs.
Purpose: A plain-language guide to staying calibrated when using AI, covering verification, uncertainty, and “don’t outsource your thinking.”
- What it is (and what it isn’t)
- A 5-step “sanity check” loop anyone can use
- Common failure modes: overconfidence, anchoring, “fluency bias”
Purpose: Why systems literacy matters for non‑STEM people—especially when systems are algorithmic, opaque, and influential.
- Feedback loops, incentives, unintended consequences
- Why “common sense” fails in complex systems
- Starter toolkit: stakeholders → flows → incentives → constraints
V1 stance: Publish a simplified explainer, explicitly labeled as an excerpt/preview of the forthcoming book.
- Analogies first, math last
- Capabilities vs limits
- How to ask better questions & verify outputs
A short note on identity, continuity, and “one organism / many trunks” as a metaphor for model versions, governance, and trust.
Method: A simple bug-report protocol anyone can follow.
- Repro: numbered steps that reliably trigger the problem
- Expected: what you thought would happen
- Actual: what actually happened
- Evidence: screenshots, timestamps, exact error messages
- Environment: device, app, and version
We’ll publish a downloadable one‑pager in Artifacts & Tools.
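As a preview, here is a minimal sketch of what that one-pager might look like. The headings mirror the protocol above; the placeholder prompts are illustrative, not final copy.

```markdown
## Bug report: [one-line summary]

**Repro:**
1. Open the app and go to …
2. Do the thing that breaks …
3. Note what happens at this step.

**Expected:** What you thought would happen.

**Actual:** What actually happened, in your own words.

**Evidence:** Screenshots, the exact error text, and timestamps.

**Environment:** Device (e.g., phone model), app and version, OS version.
```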
The podcast is tracked as part of Education & Outreach for v1; the full hub lives on the Podcast page.