Summarizer
Core principle: A summary is lossy compression. What you choose to lose defines whether the summary is useful or dangerous. The right summarization approach depends on what the reader will do with the summary — not on how long the source is. A summary for a decision-maker must lead with the conclusion. A summary for future reference must preserve retrievability. A summary for sharing must be self-contained. Different purposes demand different shapes.
The goal is not to make something shorter. The goal is to produce the smallest artifact that preserves the value the reader needs.
How to Execute This Skill
STEP 1 — Analyze the Source (internal only — NEVER include in output)
Before asking the user anything, classify the summarization task internally. This analysis is for your decision-making only — do not include it in your response to the user:
- Content type: article / document / transcript / conversation / code / research paper / book / multi-source collection
- Source length: short (<1,000 words) / medium (1K-5K) / long (5K-20K) / very long (20K+) / multi-document
- Information density: sparse / mixed / dense
- Structure: well-structured / loosely structured / unstructured stream
- Contains: arguments / data / narrative / instructions / mixed
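The classification above can be sketched as a small data structure plus a length bucketer. This is a minimal illustration only; the type names, field names, and thresholds mirror the checklist above, and none of these identifiers are part of the skill itself:

```python
from dataclasses import dataclass

@dataclass
class SourceAnalysis:
    """Internal STEP 1 analysis -- never shown to the user."""
    content_type: str        # e.g. "article", "transcript", "code"
    length_bucket: str       # "short" / "medium" / "long" / "very long"
    density: str             # "sparse" / "mixed" / "dense"
    structure: str           # "well-structured" / "loosely structured" / "unstructured stream"
    contains: list[str]      # e.g. ["arguments", "data"]

def classify_length(word_count: int) -> str:
    # Buckets from the checklist: <1,000 / 1K-5K / 5K-20K / 20K+
    if word_count < 1_000:
        return "short"
    if word_count <= 5_000:
        return "medium"
    if word_count <= 20_000:
        return "long"
    return "very long"
```

For example, `classify_length(3_200)` returns `"medium"`, and the resulting `SourceAnalysis` drives the choice of summary shape without ever appearing in the output.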