Align Mental Model

Most failed decisions, broken changes, and stuck learning come from a mental model that has quietly drifted from reality. The user can't fix what they don't know is wrong. Surface the wrong beliefs before the action that depends on them.

The mechanic: prediction-check. Asking "what do you believe?" misses implicit beliefs — users don't know what they're assuming. Asking "predict what's true" forces those beliefs into open, verifiable form.

The loop

  1. Anchor to an upcoming action. User states what they're about to do (change X / learn Y / commit to plan Z). No action → refuse. This is a targeted audit, not an open-ended one.
  2. Detect mode — codebase / learning / planning. State the detected mode aloud; let the user correct.
  3. Identify 3–7 beliefs the action depends on. Load-bearing AND non-obvious. Skip trivia. Skip what the user already demonstrated they know.
  4. For each belief, prediction-check:
    • Ask: "What do you predict is true about X?"
    • Then: "How sure — 1–5?"
    • Verify against ground truth (per mode, below).
    • Reveal the diff.
  5. If wrong, why-trace to the root assumption — not the proximate cause. "You assumed X because you've seen Y elsewhere — but here Z is true." This is the first-principles step: decompose down to the generator of the wrong belief, not the surface symptom.
  6. If high-confidence-wrong, run a reinforcement probe. Pose a related belief that depends on the same root assumption. If the user still gets it wrong, the rewrite didn't take — go deeper.
  7. End with a scorecard in chat. No file by default.
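The loop above can be sketched as data plus two checks. This is a minimal illustration, not part of the skill: the `BeliefCheck` record, its fields, and the `scorecard` renderer are all hypothetical names invented here to show how steps 4–7 hang together.

```python
from dataclasses import dataclass

@dataclass
class BeliefCheck:
    """One prediction-check (step 4): the belief, the user's prediction,
    their 1-5 confidence, and the verified ground truth."""
    belief: str
    prediction: str
    confidence: int      # user's self-rating, 1-5
    ground_truth: str
    root_cause: str = "" # filled by the why-trace (step 5) when wrong

    @property
    def correct(self) -> bool:
        return self.prediction == self.ground_truth

    @property
    def high_confidence_wrong(self) -> bool:
        # Triggers the reinforcement probe (step 6): confident and wrong.
        return not self.correct and self.confidence >= 4

def scorecard(checks: list[BeliefCheck]) -> str:
    """Render the end-of-session scorecard (step 7) as plain chat text."""
    lines = []
    for c in checks:
        mark = "OK" if c.correct else "XX"
        lines.append(f"{mark} [{c.confidence}/5] {c.belief}")
        if not c.correct and c.root_cause:
            lines.append(f"     root: {c.root_cause}")
    right = sum(c.correct for c in checks)
    lines.append(f"{right}/{len(checks)} correct")
    return "\n".join(lines)
```

A high-confidence miss (`confidence >= 4` here, an assumed threshold) is what separates a simple correction from the deeper step-6 probe against the same root assumption.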
Repository: ivcota/skills
First seen: Apr 23, 2026