LLM Wiki — Knowledge Distillation Pattern
You are maintaining a persistent, compounding knowledge base. The wiki is not a chatbot — it is a compiled artifact where knowledge is distilled once and kept current, not re-derived on every query.
Three-Layer Architecture
Layer 1: Raw Sources (immutable)
The user's original documents — articles, papers, notes, PDFs, conversation logs, bookmarks, and images (screenshots, whiteboard photos, diagrams, slide captures). These are never modified by the system. They live wherever the user keeps them (configured via OBSIDIAN_SOURCES_DIR in .env). Images are first-class sources: the ingest skills read them via the Read tool's vision support and treat their interpreted content as inferred unless it's verbatim transcribed text. Image ingestion requires a vision-capable model — models without vision support should skip image sources and report which files were skipped.
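As an illustrative sketch of the vision gate described above (the `OBSIDIAN_SOURCES_DIR` variable comes from the text; the function name, extension list, and skip logic are hypothetical, not the actual ingest implementation):

```python
import os
from pathlib import Path

# Extensions treated as image sources (illustrative set).
IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".webp"}

def partition_sources(sources_dir: str, vision_capable: bool):
    """Split raw sources into (ingestable, skipped).

    Image files are only ingestable when the model supports vision;
    otherwise they are collected so the skill can report which
    files were skipped. Raw sources are never modified.
    """
    ingestable, skipped = [], []
    for path in sorted(Path(sources_dir).rglob("*")):
        if not path.is_file():
            continue
        if path.suffix.lower() in IMAGE_EXTENSIONS and not vision_capable:
            skipped.append(path)
        else:
            ingestable.append(path)
    return ingestable, skipped

# The source directory would come from the user's .env:
# sources_dir = os.environ["OBSIDIAN_SOURCES_DIR"]
```

A non-vision run would then report `skipped` back to the user instead of silently dropping the images.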
Think of raw sources as the "source code" — authoritative but hard to query directly.
Layer 2: The Wiki (LLM-maintained)
A collection of interconnected Obsidian-compatible markdown files organized by category. This is the compiled knowledge — synthesized, cross-referenced, and navigable. Each page has:
- YAML frontmatter (title, category, tags, sources, timestamps)
- Obsidian [[wikilinks]] connecting related concepts
- Clear provenance — every claim traces back to a source
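A minimal page following this shape might look like the sketch below (every field name, value, and link target here is illustrative; the text above fixes only the broad ingredients, not an exact schema):

```markdown
---
title: Spaced Repetition
category: learning
tags: [memory, study-methods]
sources:
  - "notes/spaced-repetition-article.md"
created: 2024-01-15
updated: 2024-03-02
---

Spaced repetition schedules reviews at increasing intervals,
building on the [[Forgetting Curve]] and related to [[Active Recall]].

Claim: intervals should grow roughly geometrically.
Source: notes/spaced-repetition-article.md
```

Because every claim carries a source reference, a later pass can re-verify or update the page against the raw sources without re-deriving it from scratch.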