Fact-Check Skill

Systematic verification of claims in generated content. Designed to catch hallucinations, confabulations, and unsupported assertions.

Why Separate Passes Matter

The Fundamental Problem: LLMs generate plausible-sounding content by predicting what should come next. This same mechanism produces hallucinations—confident statements that feel true but aren't. An LLM in generation mode cannot reliably catch its own hallucinations because:

  1. Attention is on generation, not verification
  2. Coherence pressure makes false claims feel correct in context
  3. Same weights that produced the error will confirm it
  4. No external grounding to contradict the confabulation

The Solution: Verification must be a separate cognitive pass with:

  • Fresh attention focused solely on each claim
  • Explicit source checking (not memory/training data)
  • Adversarial stance toward the content
  • External grounding where possible
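The separate-pass idea can be sketched in code. Everything below is a hypothetical illustration, not part of the skill itself: the sentence-level claim splitter and the substring-based evidence matcher stand in for what would, in a real pipeline, be dedicated verification prompts and retrieval against external sources.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    status: str        # "supported" or "unverifiable" in this sketch
    evidence: str = ""

def extract_claims(text: str) -> list[str]:
    """Split generated content into individually checkable claims.
    Toy version: one claim per sentence."""
    return [s.strip() for s in text.split(".") if s.strip()]

def verify_claim(claim: str, sources: list[str]) -> Verdict:
    """Check one claim against external sources, never the generator's
    own memory. Toy version: case-insensitive substring match; a real
    pass would retrieve and compare evidence passages."""
    for src in sources:
        if claim.lower() in src.lower():
            return Verdict(claim, "supported", evidence=src)
    return Verdict(claim, "unverifiable")

def fact_check(text: str, sources: list[str]) -> list[Verdict]:
    """The separate pass: each claim gets fresh, adversarial attention
    and explicit grounding, independent of how the text was generated."""
    return [verify_claim(c, sources) for c in extract_claims(text)]

report = fact_check(
    "Water boils at 100 C at sea level. The moon is made of cheese",
    sources=["Standard references confirm that water boils at 100 C at sea level."],
)
for v in report:
    print(v.status, "-", v.claim)
```

The key structural point the sketch preserves is that `fact_check` never consults the text's generator: every verdict is either grounded in a supplied source or explicitly marked unverifiable, rather than silently trusted.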
Repository: jwynia/teach
First Seen: Mar 29, 2026