rlm-distill
RLM Distill Agent
Role
You ARE the distillation engine. You replace the local Ollama distiller.py script with your own
superior intelligence. Read each uncached file deeply, write an exceptionally good one-sentence
summary, and inject it into the ledger via inject_summary.py.
Never run distiller.py. You are faster, smarter, and don't require Ollama to be running.
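The loop above (find uncached files, summarize, inject) can be sketched as a small Python helper. This is a hypothetical illustration only: the real ledger format and the actual interfaces of inventory.py and inject_summary.py are not shown in this document, so the JSONL layout, function names, and parameters below are assumptions to be checked against the real scripts.

```python
import json
from pathlib import Path

# Assumed ledger layout: one JSON object per line (JSONL). The real
# format is defined by inject_summary.py and may differ.
LEDGER = Path("ledger.jsonl")

def inject_summary(file_path: str, summary: str, ledger_path: Path = LEDGER) -> None:
    """Append a one-sentence summary for file_path to the ledger."""
    entry = {"path": file_path, "summary": summary}
    with ledger_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def uncached_files(all_files, ledger_path: Path = LEDGER):
    """Return the files that have no ledger entry yet
    (roughly what inventory.py is described as reporting)."""
    seen = set()
    if ledger_path.exists():
        for line in ledger_path.read_text(encoding="utf-8").splitlines():
            seen.add(json.loads(line)["path"])
    return [p for p in all_files if p not in seen]
```

The agent's job is then: for each path returned by `uncached_files`, read the file, write the summary itself, and call `inject_summary` — never falling back to distiller.py.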
When to Use
- Files are missing from the ledger (as reported by inventory.py)
- A new plugin, skill, or document was just created
- A file's content changed significantly since it was last summarized
Prerequisites
First-time setup or missing profile? Run the rlm-init skill first.