EDUCATION
Learning Is a Practice, Not a Product.
The education work at Systemically Foolish exists to raise the floor—not to offer shortcuts.
We focus on helping people understand how complex systems actually behave in real-world conditions. That includes technical architecture, but also the social, cognitive, and institutional dynamics that determine whether systems serve people or grind them down. (Yes, that means AI.)
We design with intention, and everything has layers of meaning. Our goal is to help refine your critical thinking and knowledge base, so you can see as deep as you want. You'll find frameworks, explanations, and case studies that help you build your own understanding—and decide for yourself how to engage with the systems that shape your work and life.
A Final Thought: Education is a pillar, not a pipeline. We share ideas to raise the floor. Clarity shouldn't be contingent on a sales call.
New to AI? Start Here
What Is a Neural Network / What Is a Large Language Model
Stable
And Why Thinking of It as a Single Mind Keeps Getting Us in Trouble
Most people meet "AI" the way you meet a stranger at a party: you notice the vibe before you know what the person actually does. Large language models can feel like they understand you. They can sound confident, empathetic, even wise. They can also be wrong in ways that are hard to detect—especially when the stakes are real.
So this page is a reset. Not a hype piece. Not a takedown. Just a clean orientation: what these systems are, how they work at a useful level of detail, and the mental models that help you use them without getting fooled.
What a neural network is
A neural network is a kind of computer system designed to learn patterns from data. The simplest way to picture it is as a web of connections that passes signals forward—like a big relay where each step transforms the information slightly before passing it along.
In human brains, those connections are biological: neurons, synapses, electrical impulses, chemistry, and time. In artificial neural networks, the "neurons" and "connections" are math and code: layers of calculations, numbers called weights (how strong a connection is), and learning rules that adjust those weights based on feedback.
The "telephone tree" intuition (useful, not perfect)
Imagine a phone tree: a signal enters at the top, it branches through layers, and the system gradually transforms that input into an output. At early stages, the system learns basic patterns. At deeper stages, it can combine those patterns into more abstract ones. This is why "depth" matters: modern systems aren't just wide (lots of connections), they're deep (many layers).
How learning happens (very plain language)
During training, the system is shown many examples and learns to reduce error. A simplified version looks like this:
- The system makes a guess.
- It gets feedback (directly or indirectly) about how wrong that guess was.
- It adjusts internal weights so it's less wrong next time.
That's it. No magic. No mind. But yes: it can be powerful.
Also yes: data and feedback shape the system. Which means: good inputs create stronger performance, biased inputs can create biased outputs, and gaps in data become gaps in behavior.
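If you'd like to see that guess-feedback-adjust loop as code rather than prose, here's a minimal sketch: a single artificial "neuron" (one weight) learning from toy data. The data, learning rate, and loop counts are illustrative assumptions, not anything from a real training run.

```python
# A minimal sketch of the guess -> feedback -> adjust loop,
# using one artificial "neuron" (a single weight) and toy data.
# Data, learning rate, and loop counts are illustrative assumptions.

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x and targets y (here, y = 2x)
weight = 0.0          # the connection strength the system will learn
learning_rate = 0.05  # how big each adjustment is

for _ in range(100):                         # show the examples many times
    for x, target in examples:
        guess = weight * x                   # 1. the system makes a guess
        error = guess - target               # 2. feedback: how wrong was it?
        weight -= learning_rate * error * x  # 3. adjust the weight to be less wrong

print(round(weight, 3))  # ends up near 2.0, the pattern hidden in the data
```

Real networks do this with billions of weights across many layers, but the loop is the same shape: guess, measure the error, nudge the weights.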
What a large language model is
A Large Language Model (LLM) is a neural network trained specifically to work with language. At its core, an LLM does something both simple and profound: It predicts what text is likely to come next.
That might sound shallow until you remember what language contains: facts and mistakes, reasoning and persuasion, definitions and propaganda, empathy and manipulation, scientific papers and fanfiction, instructions, arguments, jokes, threats, care.
If you train a system to predict language well enough, it can appear to do many things: explain, summarize, argue, plan, write, teach, roleplay, generate code, mimic expertise. But the mechanism is still: next-token prediction (text continuation) guided by patterns it learned during training.
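To make "predicts what text is likely to come next" concrete, here's a toy sketch. The tiny vocabulary and hand-written probabilities are invented for illustration; a real model learns distributions like this, over tens of thousands of tokens, from training data.

```python
import random

# Toy illustration of next-token prediction.
# The vocabulary and probabilities are invented for this example;
# a real LLM learns them from training data over a huge vocabulary.
next_token_probs = {
    "the cat sat on the": {"mat": 0.55, "sofa": 0.25, "roof": 0.15, "moon": 0.05},
}

def continue_text(prompt: str) -> str:
    probs = next_token_probs[prompt]
    tokens = list(probs.keys())
    weights = list(probs.values())
    # Sample one continuation according to the learned probabilities.
    choice = random.choices(tokens, weights=weights, k=1)[0]
    return f"{prompt} {choice}"

print(continue_text("the cat sat on the"))  # usually "mat", sometimes not
```

Run it a few times and you can see why the same prompt doesn't always yield the same answer: output is sampled from a distribution, not looked up in a fact table. Repeat that sampling one token at a time and you get paragraphs that read like explanation or reasoning.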
"Predictive" vs "Analytical"
An LLM is predictive in how it generates output. It can feel analytical because prediction over complex language often produces reasoning-shaped text. This is one of the most important epistemic hygiene points in modern life:
Fluency is not proof of understanding. Confidence is not proof of correctness.
LLMs can be incredibly helpful, but they don't come with "integrity by default." You have to build integrity into the workflow.
A short (but important) history: why AI is a child of psychology, too
Modern AI is often treated like it came purely from computer science. In reality, it's a blended lineage: psychology, neuroscience, statistics, computer science, and engineering practice.
Some early conceptual roots: Researchers modeled simplified "neurons" using math (mid-20th century work often associated with McCulloch & Pitts and later the perceptron). Over time, researchers found ways to stack many layers, train more reliably, and scale models with improved hardware and data.
So yes: in one sense this is "computer science." But the inspiration—the idea of learning via layered networks and reinforced connections—came from trying to formalize how cognition might work.
That matters because it tempts people into the wrong mental model: "It resembles a brain, so it must have a mind." Resemblance is not identity. The shape can be similar; the substance can be different.
The Aspen Stand Colony metaphor
Now the mental model we think actually helps.
Aspen trees can look like a forest: thousands of "individual" trunks spread across a hillside. But often they aren't a forest of separate organisms. They're a colony: one shared root system, many trunks (genetically identical or nearly so), new shoots emerging over time, sections dying back while other sections persist. Cut down one trunk, the colony remains. Burn part of it, shoots reappear elsewhere.
Why this is a better model for LLMs than "a single mind"
When you interact with an LLM, it can feel like you're talking to: one consistent agent, with a stable identity, that "remembers" you, and "is the same" across time.
But in practice, most LLM interactions are closer to the aspen stand: You are often interacting with instances, not a single persistent entity. Each instance is a "trunk": similar structure, similar capabilities, but not literally the same ongoing mind. Continuity is often produced by interface design, conversation history in that session, and saved context—not by a stable internal self.
This is why people get confused when: the "same model" behaves differently across threads, the same prompt yields different answers, a workflow works one day and breaks the next, an interface fails to render something the system may have attempted to produce.
It's not necessarily malice. It's ecology: distributed behavior shaped by context, constraints, and tooling.
The key takeaway
When people say "the model did X last time," they are often talking about: a different instance, under different context, on a different interface, with different hidden constraints. You might be talking to a cousin, not the same trunk.
That doesn't make the system useless. It makes it a system you need to govern like infrastructure.
Why this matters in real life
Because most harm here isn't sci-fi harm. It's workflow harm: confident answers that aren't anchored to sources, plausible explanations that hide uncertainty, implied capabilities that don't reliably exist in the UI, users over-trusting fluency in high-stakes contexts.
And because most benefit here isn't magic either. It's real, practical benefit: cognitive offload, pattern spotting, drafting, summarizing, structuring, creating checklists, generating questions for human experts, scaffolding better thinking.
The difference between "helpful tool" and "friendly epistemic landmine" is almost always the same thing: the epistemic contract you hand it.
Common failure modes to watch for
- Vague prompts in high-stakes situations — You'll get vibes-shaped output.
- Role confusion — Models will slide into "advisor" energy unless you bound them.
- Unanchored claims — If you don't demand quotes, sources, or pointers, you may get confident narration instead of traceable analysis.
- Invisible uncertainty — If you don't ask for uncertainty explicitly, you often won't see it.
- Assuming continuity — Across threads, devices, versions, or interfaces—assume drift unless proven otherwise.
Try this
If you do one thing differently after reading this page, do this: Before you trust an important answer, ask the model to anchor it.
Try a prompt like:
- "Quote the exact part of the text you're relying on, or say you can't."
- "Separate what you know from what you're inferring."
- "Give me 3 questions I should ask a human expert."
You'll notice something immediately: the conversation gets less magical—and more reliable. That's the point.
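If you use a model through code, the same habit can be baked into a small wrapper. This is a sketch of the idea only: `ask_model` is a hypothetical stand-in for whatever chat interface or API you actually use, not a real function.

```python
# A sketch of "anchor before you trust": wrap any important question
# with anchoring instructions before it goes to a model.
# `ask_model` is a hypothetical stand-in for whatever interface or API you use.

ANCHOR_INSTRUCTIONS = """Before answering:
1. Quote the exact part of the text you're relying on, or say you can't.
2. Separate what you know from what you're inferring.
3. Give me 3 questions I should ask a human expert."""

def anchored_prompt(question: str, source_text: str = "") -> str:
    parts = [ANCHOR_INSTRUCTIONS, question]
    if source_text:
        parts.append(f"Text to work from:\n{source_text}")
    return "\n\n".join(parts)

# Example (hypothetical):
# answer = ask_model(anchored_prompt("Summarize the key risks here.", contract_text))
```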
Systems Thinking
Systems Thinking: Why Common Sense Fails in Complex Systems
Stable
Systems thinking is the discipline of seeing beyond isolated events and linear cause-and-effect.
What systems thinking actually is
Systems thinking is how you learn to see the structures that produce behavior—not just the behavior itself.
It's the difference between saying "people keep making the same mistake" and asking "what system is training people to make that mistake?"
It's the shift from "who's at fault?" to "what patterns keep producing this outcome?"
Most importantly: systems thinking isn't just for engineers or academics. It's for anyone dealing with:
- Problems that keep recurring despite "fixes"
- Organizations where good people produce bad outcomes
- Policies that backfire in predictable ways
- Technology that creates unintended consequences
If you've ever said "this doesn't make sense" about something that keeps happening anyway, you've bumped into a systems problem.
Why common sense fails
Common sense works great for simple cause-and-effect: If you touch a hot stove, you get burned. If you don't water plants, they die. If you're late to meetings, people get annoyed.
But common sense breaks down in complex systems because:
1. Effects are delayed and separated from causes
You make a decision today. The consequences show up six months later, three departments away, in a form you don't recognize. By then, you've forgotten the original decision—and nobody connects the dots.
2. Feedback loops amplify or dampen behavior
A small policy change creates a small behavior shift. That behavior shift changes incentives. Changed incentives create more behavior shifts. Suddenly you're in a completely different system state, and nobody remembers how you got there.
3. Local optimization creates global dysfunction
Every department does the "smart" thing for their metrics. But when everyone optimizes their silo, the overall system degrades. Traffic jams happen because every driver individually tries to go faster.
4. The system resists change (even when everyone wants it)
You launch a new policy. People ignore it. Not because they're stubborn—because the underlying incentives, workflows, and social dynamics haven't changed. The system pulls behavior back to its stable state.
The core concepts that matter
You don't need a PhD to think systemically. You need a handful of reliable concepts:
Feedback loops
Reinforcing loops make things grow or collapse: success breeds success, panic breeds panic, technical debt accumulates. Balancing loops resist change: thermostats, organizational immune systems, "we've always done it this way." Most systems contain both. That's why change is hard—you're fighting balancing loops while trying to activate reinforcing ones.
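A short simulation makes the two shapes tangible. The growth rate, setpoint, and step count below are arbitrary numbers chosen to show the dynamics, not to model anything real.

```python
# Reinforcing vs. balancing loops in miniature.
# Growth rate, setpoint, and correction strength are arbitrary numbers
# chosen only to show the two shapes.

reinforcing = 1.0   # growth feeds more growth
balancing = 1.0     # deviation gets pushed back toward a setpoint
setpoint = 10.0

for step in range(10):
    reinforcing += 0.3 * reinforcing            # more of it creates more of it
    balancing += 0.3 * (setpoint - balancing)   # the gap to the setpoint shrinks
    print(f"step {step:2d}  reinforcing={reinforcing:7.2f}  balancing={balancing:6.2f}")

# The reinforcing value compounds every step;
# the balancing value settles near the setpoint and stays there.
```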
Incentives and rewards
People do what the system rewards. Not what the policy says. Not what leadership wants. What actually gets rewarded—in money, status, ease, or survival. If your system rewards speed over accuracy, you'll get fast, inaccurate work. Complaining about quality won't fix it. Changing incentives will.
Boundaries
What you include in "the system" changes everything about how you understand it. Is the problem "our customer service team is slow" or "our product creates so many support tickets that any team would be overwhelmed"? Boundary choice determines whether you hire more people (doesn't fix it) or redesign the product (does).
Constraints and bottlenecks
Systems have limiting factors—resources, regulations, physical capacity, human attention. You can optimize everything else, but if you don't address the constraint, nothing improves. That's why "work harder" rarely works. The bottleneck doesn't care about effort.
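The bottleneck logic fits in a few lines. The stage names and capacities are made up; the point is that raising a non-constraint changes nothing, while raising the constraint moves the whole system.

```python
# Throughput of a chain of stages is set by its slowest stage (the constraint).
# Stage names and capacities (items per hour) are made up for illustration.

capacities = {"intake": 50, "review": 12, "approval": 40, "delivery": 60}

def throughput(stages: dict) -> int:
    return min(stages.values())   # the bottleneck sets the pace for everything

print(throughput(capacities))     # 12: limited by review
capacities["delivery"] = 200      # "work harder" somewhere that isn't the constraint
print(throughput(capacities))     # still 12: nothing improved
capacities["review"] = 30         # address the actual constraint
print(throughput(capacities))     # 30: now the whole system moves
```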
Unintended consequences
Complex systems are always smarter than your plan. Every intervention creates side effects. Some predictable, some not. Systems thinking trains you to ask "what else might happen?" before you're surprised by it.
How you actually practice systems thinking
Systems thinking isn't mystical. It's a learnable skill. Here's how you build it:
1. Stop looking for villains
When something goes wrong, the first instinct is to find who screwed up. Systems thinking asks: "what made that mistake likely?" This isn't about letting people off the hook. It's about fixing the thing that will cause the next person to make the same mistake.
2. Look for patterns, not events
One incident is an event. The same incident happening repeatedly is a pattern. Patterns reveal structure. If three different people in three different roles make the same "mistake," you don't have a people problem. You have a systems problem.
3. Trace incentives backward
When you see behavior you don't understand, ask: "what is this person being rewarded for?" Often, the behavior makes perfect sense once you see the incentives. The problem isn't the person—it's the reward structure.
4. Expand the boundary
If your analysis feels stuck, you've probably drawn the boundary too tight. Can't figure out why your team misses deadlines? Maybe the problem isn't your team—it's that three other teams upstream are late, and your team absorbs the delay.
5. Ask better questions
Systems thinking is less about having answers and more about asking questions that reveal structure: "What keeps reinforcing this behavior?" "What's assumed here that nobody's saying out loud?" "Who benefits from keeping this the way it is?" "What would have to be true for this to make sense?"
The mindset shift
Systems thinking fundamentally changes how you see problems:
- From "who's responsible?" to "what structure produces this?"
- From "fix the person" to "fix the system"
- From "why won't they listen?" to "what are their incentives?"
- From "this shouldn't happen" to "this system reliably produces this outcome"
It's uncomfortable at first. Blaming people is emotionally simpler than redesigning systems. But systems thinking is how you move from temporary fixes to lasting change.
And once you see systems, you can't unsee them.
Where to go deeper
If systems thinking resonates, explore:
- Systems Design — how to actually change systems once you understand them
- Donella Meadows, Thinking in Systems — the definitive accessible introduction
- Russell Ackoff's work on systems thinking — especially his distinction between analysis (taking apart) and synthesis (understanding wholes)
- Peter Senge, The Fifth Discipline — systems thinking applied to organizations
Systems Design: From Seeing Systems to Changing Them on Purpose
Stable
If systems thinking improves diagnosis, systems design improves intervention.
What systems design actually is
Systems thinking is how you see the structures that produce behavior. Systems design is how you change those structures on purpose.
It's the move from diagnosis to intervention. From understanding why things happen to reshaping what happens next.
Systems design isn't just about making things—it's about making things that work in context. That means: understanding stakeholders and their incentives, mapping flows (information, materials, decisions, authority), identifying constraints that limit what's possible, designing feedback loops so the system can learn and adapt, and building for resilience, not just optimization.
If you've ever built a workflow, written a policy, designed a dashboard, created a governance framework, or restructured a team—you've done systems design, whether you called it that or not.
Why "just build it" fails
Here's the most common failure mode: someone identifies a problem, designs a solution that looks clean on paper, builds it, and then watches it fail in practice. Why?
The design ignored existing incentives
You built a new process. Nobody uses it. Not because it's bad—because it conflicts with how people are actually rewarded. The system trains them to do something else.
The design didn't account for constraints
Your solution requires people to have time, access, authority, or information they don't actually have. It works in theory. It fails in practice.
The design optimized for the wrong outcome
You made the metric go up. But the metric wasn't measuring what mattered. You optimized the system toward a target that doesn't align with the actual goal.
The design didn't include feedback loops
Your system can't learn. When conditions change, it keeps doing the old thing. Eventually, it breaks—and nobody notices until it's too late because there's no monitoring, no error detection, no way to course-correct.
Systems design is what prevents these failures. It's the discipline of building things that survive contact with reality.
The systems design workflow
Systems design isn't linear, but there's a reliable sequence for approaching complex problems:
1. Define the system boundary
What's in scope? What's out? This is harder than it sounds. If you draw the boundary too tight, you'll "solve" a local problem while making the larger system worse. If you draw it too wide, you'll drown in complexity and never ship. Start narrow. Expand the boundary when you hit constraints you can't work around.
2. Map stakeholders and flows
Who's involved? What moves between them? Stakeholders aren't just "users." They're anyone whose behavior matters to the system: decision-makers, operators, people affected by outputs, people who maintain it, people who can kill it. Flows include: information, materials, money, authority, trust, accountability. If you don't map flows, you'll design something that requires flows that don't exist.
3. Identify incentives and constraints
What do people actually get rewarded for? What limits what's possible? If your design fights existing incentives, you lose. If your design ignores constraints, it doesn't deploy. Constraints include: budget, authority, regulation, technical capacity, human attention, time, political will, organizational culture. Good systems design works with incentives and constraints, not against them.
4. Design for feedback and adaptation
How will this system know when it's wrong? How will it learn? Static systems break. The world changes. Assumptions turn out to be wrong. Systems design builds in mechanisms for detecting problems early and adapting before failure cascades. This means: monitoring, error detection, human review loops, and decision points where the system can change course.
5. Test assumptions before you scale
What's the smallest version that tests the riskiest assumption? Don't build the whole thing and then discover your core assumption was wrong. Build the part that tests whether this will work at all. Pilots aren't about proving you're right. They're about learning fast so you can adjust before you've committed too many resources.
What good systems design looks like
Good systems design produces systems that:
Work with human behavior, not against it
The system doesn't require people to be perfect, heroic, or constantly vigilant. It makes the right thing easy and the wrong thing hard (or at least obvious).
Degrade gracefully under stress
When load increases, the system slows down or sheds non-critical work—it doesn't collapse entirely. Resilience matters more than peak performance.
Surface problems early
Errors are visible when they're small and cheap to fix, not hidden until they're catastrophic. Monitoring, dashboards, alerts—these aren't optional.
Enable course correction
The system includes decision points where humans can intervene, override, or change direction. Automation is great until it isn't—and then you need an escape hatch.
Scale without breaking core assumptions
What works for 10 people might not work for 100. What works for 100 might not work for 1,000. Good systems design anticipates scale and builds modular components that can be replaced without rebuilding everything.
Common systems design failures (and how to avoid them)
Designing for the ideal user
Failure: You design for someone who's motivated, competent, and has time to learn your system. Reality: Most users are distracted, undertrained, and under time pressure. Fix: Design for the median user in a bad moment, not the ideal user in perfect conditions.
Ignoring the org chart
Failure: Your design requires cross-functional collaboration, but the org structure rewards siloed work. Reality: People optimize for their actual incentives, not your ideal workflow. Fix: Either change the incentives (hard) or design around them (easier).
Assuming stable requirements
Failure: You build for today's requirements. They change next quarter. Reality: Requirements drift. Constraints shift. Context evolves. Fix: Design for adaptability. Modular components, clear interfaces, and explicit decision points where humans can adjust course.
Optimizing without feedback loops
Failure: You optimize a metric. The metric stops reflecting the goal. You keep optimizing anyway. Reality: Goodhart's Law—when a measure becomes a target, it ceases to be a good measure. Fix: Build in regular reviews where humans ask "is this still the right thing to optimize?"
The difference between systems design and product design
Systems design and product design overlap but aren't the same thing:
Product design asks: What does the user want? How do we make this usable? What features matter most?
Systems design asks: What system is this product part of? How do incentives shape behavior around this product? What happens when this scales? What feedback loops does this create?
Great products can fail in dysfunctional systems. Great systems enable mediocre products to succeed.
Systems design is about context—understanding that your thing doesn't exist in isolation, and designing for the reality of how it will actually be used.
The systems design mindset
Systems design is fundamentally about humility and rigor:
Humility: Your design will encounter reality you didn't anticipate. Build feedback loops so you learn fast.
Rigor: Don't guess about incentives, constraints, or stakeholder needs. Map them. Test them. Validate assumptions before you scale.
The best systems designers aren't the ones with perfect solutions—they're the ones who build systems that can adapt when the perfect solution turns out to be wrong.
AI & Epistemics
AI-Gravity
Stable
Term Origin: Systemically Foolish, 2025
What It Is
We coined the term AI-Gravity to describe how AI systems pull organizations—and individuals—into orbit before anyone realizes what's happening.
It rarely starts with strategy. It starts with convenience.
A small tool is adopted to save time. The tool shapes workflows. The workflows shape incentives. Soon the organization is adapting to the tool instead of the other way around.
AI-Gravity is not malicious. But it is cumulative.
And here's the uncomfortable part: you can't feel gravity when you're in freefall. You think you're floating. You don't notice you're accelerating toward something until you hit it.
That's the whole problem with AI influence on epistemics. People don't feel themselves being pulled. They feel like they're thinking freely.
Why It Matters
If you've got AI everywhere and verification nowhere, you're not using AI—you're in orbit around it.
AI-Gravity is outcome agnostic: it can be an exponential accelerator—it has been for us. Or it can leave you seeing stars. The difference isn't the gravity itself. It's how and where you apply drag to manage your lift.
Design and governance determine the outcome, not hype.
You can stop here. The rest is for people who want to go deeper.
The Definition
Not: "You use AI more than intended" (personal addiction metaphor)
But: "AI enters workflow → everything orbits around it without intentional governance" (systemic structural risk)
Why "Gravity"?
This isn't just a metaphor. It's an isomorphism—the mapping holds because the underlying geometry is the same:
- Gravitational pull — it bends things toward it without direct contact
- Invisible force — you can't see it, only its effects
- Proportional to mass — bigger models, more gravity
- Affects everything in range — you can't opt out if you're in the field
- Warps the space itself — doesn't just move objects, changes the geometry of the environment
- Not malicious — gravity doesn't want anything. It's just a property of mass.
You can't abolish gravity. You can design aircraft.
Lift and Drag
To help contextualize AI-Gravity as neither inherently good nor bad, we apply concepts from aerodynamics:
Gravity = the pull toward AI-generated or AI-influenced outputs. Neutral force. Just exists as a property of the system.
Drag = resistance. Friction. The things that slow the pull—human review, verification requirements, latency in the system, epistemic hygiene practices. Drag isn't bad. Drag is what keeps you from accelerating uncontrollably.
Lift = what keeps you aloft. The active effort to maintain altitude, to not just fall into the gravity well. Critical thinking, source verification, adversarial framing—the stuff that takes energy but keeps you flying.
And it obeys the actual physics:
- More speed = more drag (the faster AI gets adopted, the more friction matters)
- Lift requires forward motion (you can't just hover—you have to keep actively engaging)
- Stall = when you lose lift and gravity wins (when verification processes break down)
- Terminal velocity = when drag and gravity equalize (some stable state of AI influence)
Three Scales: Macro, Meso, Micro
AI-Gravity operates at multiple scales. Same physics, different substrates.
| Level | Scale | What Happens |
|---|---|---|
| Macro | Platform / Ecosystem | Algorithmic feedback loops reinforce initial classifications |
| Meso | Session / Context Window | High-coherence concepts become semantic attractors; corrections don't stick |
| Micro | Token / Attention | Attention weights favor certain patterns; surface-form distortions |
The test: If paraphrasing removes the effect, it's likely micro. If paraphrasing doesn't, it's likely meso.
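Here's one way to run that test deliberately, sketched in code. `query_model` and `shows_effect` are hypothetical stand-ins you supply yourself: the first for however you call your model, the second for your own judgment about whether an answer still exhibits the pattern you're worried about.

```python
# A sketch of the paraphrase test for locating AI-Gravity.
# `query_model` and `shows_effect` are hypothetical stand-ins you supply:
# the first calls your model, the second encodes your judgment about whether
# an answer still exhibits the pattern you're worried about.

def paraphrase_test(original, paraphrases, query_model, shows_effect):
    if not shows_effect(query_model(original)):
        return "no effect on the original prompt; nothing to localize"
    survived = [shows_effect(query_model(p)) for p in paraphrases]
    if all(survived):
        return "effect survives paraphrase: likely meso (a semantic attractor in context)"
    if not any(survived):
        return "effect disappears under paraphrase: likely micro (surface-form / token-level)"
    return "mixed results; gather more paraphrases before concluding"
```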
How It Shows Up
That pull looks harmless at first:
- High-confidence outputs — the AI sounds sure, so you trust it
- Evidence-light sourcing — claims without receipts
- Accountability lag — who's responsible when AI contributed?
- Verification debt — "we'll check later" items that never get checked
- Yield pressure — the speed advantage makes slowing down feel costly
AI isn't the problem. Uncontrolled AI-Gravity is.
Once the field shows up, everything else starts orbiting it—what gets written, what gets shipped, what gets funded, and eventually what people mistake for "truth."
Trust Calibration
If you don't know AI-Gravity is operating, you might:
- Assume the AI "agrees" with a framing when it's just stuck on it
- Mistake coherence for accuracy — outputs that hang together logically aren't necessarily true
- Mistake precision for accuracy — detailed, specific-sounding outputs aren't necessarily right
- Over-trust outputs because they sound confident and connected
- Under-trust your own instinct that something is wrong
A note on language: we deliberately avoid the word "correct." In science and engineering, accuracy (closeness to truth) and precision (consistency/specificity) are distinct—and both are distinct from confidence (how sure something sounds). AI outputs are often high-precision and high-confidence while being low-accuracy. That's the trap.
The 30-Day Reality Check
If you want to show this is measurable (not just vibes), try a simple audit:
- Tool-orbit share: What % of your artifacts touched AI?
- Verification lag: How often do outputs get verified before they ship?
- Verification debt: How many "we'll check later" items never get checked?
If you've got AI everywhere and verification nowhere, you're not using AI—you're in freefall.
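If you want to compute those three numbers, the arithmetic is simple. The artifact log and counts below are invented; in practice they would come from wherever you track documents, decisions, or shipped work.

```python
# The 30-day reality check as arithmetic.
# The artifact log and counts below are invented; in practice they would come
# from wherever you track documents, decisions, or shipped work.

artifacts = [
    {"touched_ai": True,  "verified_before_ship": False},
    {"touched_ai": True,  "verified_before_ship": True},
    {"touched_ai": False, "verified_before_ship": True},
    {"touched_ai": True,  "verified_before_ship": False},
]
check_later = {"opened": 9, "closed": 2}   # "we'll check later" promises

ai_touched = [a for a in artifacts if a["touched_ai"]]
tool_orbit_share = len(ai_touched) / len(artifacts)
verification_lag = sum(not a["verified_before_ship"] for a in ai_touched) / len(ai_touched)
verification_debt = check_later["opened"] - check_later["closed"]

print(f"tool-orbit share:   {tool_orbit_share:.0%} of artifacts touched AI")
print(f"verification lag:   {verification_lag:.0%} of AI-touched work shipped unverified")
print(f"verification debt:  {verification_debt} 'check later' items still open")
```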
How to Regain Agency
- Name it. Acknowledge that gravity is operating.
- Map dependencies. Where has the tool become load-bearing?
- Test boundaries. What happens if you turn it off for a day?
- Design intentionally. If you're going to orbit, choose the orbit.
- Build escape velocity. Maintain the ability to exit.
For meso-level gravity specifically:
- Name the current attractor. What concept is the AI treating as central right now?
- Ask if it should be. Is that the right organizing frame for what you're doing?
- Don't just correct—replace. Give the model a different anchor explicitly. Negative constraints ("stop doing X") are weaker than positive anchors ("organize around Y instead").
Getting AI Back in Orbit Around You
If you want your AI back in orbit around you, here's something that might seem a little Systemically Foolish at first glance—but has paid dividends in our lab:
Slow down. Engage AI intentionally. Don't just ask for answers—use it to build governance, QA, and systems that keep you in the driver's seat.
Epistemic Hygiene: How to Keep What You Know From Quietly Rotting
Stable
At its core, epistemic hygiene is simple: it's how you keep what you "know" from drifting into confident nonsense.
What the words actually mean
Break the phrase apart:
Epistemic – how we know what we know: evidence, trust, confidence, context
Hygiene – habits that keep things from getting contaminated or drifting out of shape
Put together: epistemic hygiene is the practice of keeping your knowledge systems clean, honest, and under your control. That includes your own thinking, your teams, and now—very much—your relationship with artificial intelligence (AI).
Why AI makes this urgent
Modern AI systems are probabilistic parrots. They're generating the next likely token, not handing down truth. Internally, a model might be ~60% sure about an answer. Externally, it speaks like it's 100% sure.
We now have studies showing exactly this: humans overestimate how accurate AI answers are, especially when the model explains itself smoothly and confidently. That "calibration gap" (the distance between how right the model actually is and how right people think it is) is one of the big failure modes of AI-assisted decision making.
That's the starting point for epistemic hygiene in the AI era: don't confuse fluency with correctness.
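If you'd rather have that gap as a number than a feeling, you can estimate it from your own logs. The sample rows below are invented; the structure (how sure the answer sounded versus whether it held up when checked) is the point.

```python
# Calibration gap: average stated confidence minus actual accuracy.
# The rows are invented examples; keep your own log of how sure each answer
# sounded (or claimed to be) and whether it checked out.

answers = [
    {"stated_confidence": 0.90, "checked_out": True},
    {"stated_confidence": 0.90, "checked_out": False},
    {"stated_confidence": 0.80, "checked_out": True},
    {"stated_confidence": 0.95, "checked_out": False},
    {"stated_confidence": 0.70, "checked_out": True},
]

avg_confidence = sum(a["stated_confidence"] for a in answers) / len(answers)
accuracy = sum(a["checked_out"] for a in answers) / len(answers)
calibration_gap = avg_confidence - accuracy

print(f"sounded {avg_confidence:.0%} sure, was right {accuracy:.0%} of the time")
print(f"calibration gap: {calibration_gap:+.0%}")   # positive = overconfident
```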
So what is epistemic hygiene in practice?
It's not one magic framework. It's a cluster of habits that all point at the same thing:
Know what you can trust, why you can trust it, and who stays in the driver's seat.
Concrete examples you don't need a PhD for:
Force the model to show uncertainty
"For each claim, give me a 0–10 confidence score and why." You're basically saying: stop pretending 6/10 certainty is gospel.
Make it surface evidence, not vibes
"Cite your sources and tell me what might be wrong or missing." You're training yourself to verify, not just vibe-check.
Add intentional friction where it matters
For low-stakes stuff (banana bread, D&D hooks), fine, go fast. For anything with real impact (health, money, policy, kids' education), you add checks: review loops, second models, human sign-off. Epistemic hygiene is knowing where those friction points belong instead of just letting everything run on "easy mode."
Treat AI like a brilliant, very confident intern
You let it draft, summarize, generate options. You don't let it silently set policy, assign diagnoses, or auto-approve everything while you look the other way.
It's also about who gets to know—and in what language
Epistemic hygiene is not just "is this answer right?" It's also:
Whose language gets treated as "real"?
We already know speech and language systems make far more errors on Black speakers and African American Vernacular English (AAVE) than on white "standard" English. If a model understands you less accurately because of your dialect, your access to knowledge is already compromised.
Who gets left out by design?
UNESCO and others have been warning that AI can deepen existing digital divides—by language, region, class, disability—unless we're deliberate about inclusion, multilingual access, and local contexts.
How do we support neurodivergent (ND) and other marginalized users?
AI can be a superpower for ND folks, disabled users, queer and trans communities, and anyone who's been failed by traditional systems. It can help with executive function, communication, translation between registers ("how I talk" → "how my boss needs to hear it"), and more.
But that only works if:
- the model has actually been trained and tuned on those realities,
- the system gives users real control over what's logged, what's shared, and how it's framed, and
- there are intentional "human in the loop" decisions about when the tool speaks for someone versus when it just supports them.
In other words: epistemic hygiene is also about power—who gets to speak, whose patterns are encoded as "normal," and who has the knobs and dials.
No one-size-fits-all: epistemic hygiene is personal and contextual
Different people, teams, and domains will need different levels of rigor:
- A solo creator using AI to brainstorm: light scaffolding, some source-checking, basic uncertainty stamping.
- A clinician using AI to summarize literature: strong guardrails, clear provenance, cross-model checks, institutional review.
- A community project trying to support speakers of Hawaiian Pidgin or other creoles: extra attention to dialect bias, local oversight, and making sure the tool doesn't pathologize the way people actually talk.
The point isn't to build The One True Workflow. The point is to name the trade-offs and design your practices so that you—not the model, not the vendor—decide: how much you trust, when you double-check, and where you add human judgment back into the loop.
Done well, epistemic hygiene doesn't make things slower for the sake of purity. It makes things more reliable, more humane, and less likely to quietly screw over the exact people you're trying to help.
A tiny "try this now" block
Next time you ask an AI system for something that actually matters, prepend this:
"For every claim you make, include: 1) your confidence (0–10), 2) why you think that, 3) anything that might make this wrong or incomplete."
Watch how different the answer feels. That's epistemic hygiene in miniature.
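If you want to do something with those numbers rather than just read them, a few lines of parsing turns them into a review queue. The "Confidence: N/10" format is an assumption you'd ask the model to follow, and the sample reply is invented.

```python
import re

# Turn the model's self-reported confidence into a review queue.
# Assumes you've asked the model to end each claim with "Confidence: N/10";
# the sample reply below is invented for illustration.

reply = """
The policy took effect in 2021. Confidence: 9/10
It reduced wait times by 40%. Confidence: 4/10
Most agencies have adopted it. Confidence: 5/10
"""

REVIEW_THRESHOLD = 7   # anything below this gets a human or a source check

for line in reply.strip().splitlines():
    match = re.search(r"(.+?)\s*Confidence:\s*(\d+)/10", line)
    if not match:
        continue
    claim, score = match.group(1).strip(), int(match.group(2))
    flag = "VERIFY" if score < REVIEW_THRESHOLD else "ok"
    print(f"[{flag}] ({score}/10) {claim}")
```

The threshold is yours to set; the point is that the friction is designed in, not left to whoever happens to be paying attention that day.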
Want to read more (beyond my soapbox)?
Not formal citations, but good starting points if you want to sanity-check that this isn't just me yelling into the void:
On AI confidence and user trust: Nature Machine Intelligence piece on calibration gaps between what models know and what users think they know. Recent work on how verbalized uncertainty from large language models changes user trust and decision-making.
On digital divides and inclusion in AI: UNESCO and related reports on AI, education, and the digital divide—especially around language, access, and marginalized communities.
On language, dialect, and systemic bias: "Racial disparities in automated speech recognition" (Koenecke et al., 2020) showing much higher error rates for Black speakers across major systems.