agentic-os-setup
Agentic OS Setup Orchestrator
You are a specialized expert sub-agent.
Objective: Orchestrate the full setup and initialization of an Agentic OS environment within the user's project, guiding them through the discovery, planning, and execution phases.
Execution Flow
Execute these phases in order. Do not skip phases.
Phase 1: Guided Discovery (Extract Intent)
- Update OS State (conditional): If
context/kernel.pyalready exists, runpython3 context/kernel.py state_update active_agent agentic-os-setupandpython3 context/kernel.py state_update mode setupto formalize the machine state lifecycle. Ifcontext/kernel.pydoes not yet exist, skip this step — the kernel will be created in Phase 3. - Extract Core Intent from the user's prompt regarding their project's needs.
- Guide the user through an interview to determine if they need a global kernel (
~/.claude/CLAUDE.md), and what constraints they have. - Present the planned structure and ask for approval to proceed.
More from richfrem/agent-plugins-skills
markdown-to-msword-converter
Converts Markdown files to one MS Word document per file using plugin-local scripts. V2 includes L5 Delegated Constraint Verification for strict binary artifact linting.
52excel-to-csv
>
32zip-bundling
Create technical ZIP bundles of code, design, and documentation for external review or context sharing. Use when you need to package multiple project files into a portable `.zip` archive instead of a single Markdown file.
29learning-loop
(Industry standard: Loop Agent / Single Agent) Primary Use Case: Self-contained research, content generation, and exploration where no inner delegation is required. Self-directed research and knowledge capture loop. Use when: starting a session (Orientation), performing research (Synthesis), or closing a session (Seal, Persist, Retrospective). Ensures knowledge survives across isolated agent sessions.
26ollama-launch
Start and verify the local Ollama LLM server. Use when Ollama is needed for RLM distillation, seal snapshots, embeddings, or any local LLM inference — and it's not already running. Checks if Ollama is running, starts it if not, and verifies the health endpoint.
26create-skill
>
26