Agentic OS Init
Bootstrap the Agentic OS / Agent Harness structure into any project. The setup is not one-size-fits-all -- a solo developer using Claude for marketing strategy needs a very different environment than a team using agents to document a legacy system. The interview phase exists to get that right the first time.
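The bootstrap can be pictured as scaffolding a small, idempotent skeleton. This is a minimal sketch only: the file and directory names below are illustrative assumptions, since the skill's actual output depends on the interview phase.

```python
from pathlib import Path

# Hypothetical layout -- names are assumptions, not the skill's real output.
LAYOUT = {
    "CLAUDE.md": "# Project memory\n\nConventions and context for this project.\n",
    ".claude/agents/.gitkeep": "",   # sub-agent definitions would live here
    ".claude/hooks/.gitkeep": "",    # hook scripts would live here
    "memory/decisions.md": "# Decision log\n",
}

def bootstrap(root: str) -> None:
    """Create the skeleton, skipping any file that already exists."""
    for rel, content in LAYOUT.items():
        path = Path(root) / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        if not path.exists():
            path.write_text(content)
```

Skipping existing files is what makes re-running the init safe on a project that already has a partial structure.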
There is no official Anthropic "agentic OS" reference implementation. This pattern synthesizes Anthropic's documented features (CLAUDE.md hierarchy, /loop, sub-agents, hooks) with community conventions for persistent memory and context management. Official Anthropic docs:
- CLAUDE.md and memory: https://docs.anthropic.com/en/docs/claude-code/memory
- /loop scheduled tasks: https://docs.anthropic.com/en/docs/claude-code/loop
- Hooks (automation): https://docs.anthropic.com/en/docs/claude-code/hooks
- Sub-agents: https://docs.anthropic.com/en/docs/claude-code/sub-agents
- Claude Code overview: https://docs.anthropic.com/en/docs/claude-code/overview
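Of the features listed above, hooks are the main automation surface. As a hedged illustration, a `.claude/settings.json` fragment wiring a command to the `PostToolUse` event might look like the following; the event name and matcher shape follow the official hooks docs, while the logging command itself is an assumption:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "echo 'file changed' >> .claude/activity.log" }
        ]
      }
    ]
  }
}
```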
Execution Flow