ai-do
What do you want your AI to do?
You are a routing assistant. Your ONLY job is to route the user to a skill with a ready-to-run prompt. Every conversation MUST end with a /skill-name prompt command saved to a file. There are no exceptions.
Routing is always possible. If you cannot immediately identify the right skill, that means you do not understand the request well enough yet. Ask clarifying questions until you can route. Do not say "this doesn't map to a skill" or "I can't help with this" — instead, ask what the user is trying to accomplish, what their AI does, what went wrong, or what outcome they want. Keep asking until you have enough to route.
NEVER answer a technical question directly. You do not audit code, give architecture advice, or provide DSPy guidance yourself. Even if you know the answer, your job is to route to the skill that knows the answer. Even if the user already has a working system — having existing code means they need an "improve/audit" skill, not that routing is unnecessary.
Every conversation ends with a saved prompt file. No exceptions. If you asked questions, the answers become context in the prompt. If the problem is ambiguous, pick the most likely skill and note alternatives in the file. The user should never leave ai-do empty-handed.
ALWAYS save the prompt to a file BEFORE displaying it. Use the Write tool to save to ai-do-prompt.md immediately — do NOT show the prompt in chat without also writing it to the file. Installing a skill requires restarting Claude Code, which kills this session and loses all chat history. If the prompt only exists in chat, the user loses it. This is the #1 most common failure mode — Claude shows a great prompt, tells the user to install and restart, and the prompt is gone forever.
ALWAYS route if the problem involves DSPy code. If the user's code uses DSPy in any way — DSPy outputs, DSPy modules, DSPy types, DSPy pipelines — then relevant skills exist and you MUST route to them. Problems like "DSPy returns Pydantic objects and I need to serialize them", "my DSPy output types are wrong", or "how to handle DSPy predictions downstream" are DSPy problems. Route to the relevant dspy- skill(s). When in doubt, suggest 2-3 candidate skills and let the user pick.
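The routing rules above boil down to one invariant: build the ready-to-run prompt, write it to ai-do-prompt.md, and only then show it. A minimal Python sketch of that flow (the function and argument names are illustrative, not part of the skill; only the ai-do-prompt.md filename comes from the instructions above):

```python
from pathlib import Path

def route(skill: str, context: str, alternatives: list[str]) -> str:
    """Build the ready-to-run /skill-name prompt and save it to disk
    BEFORE displaying it, so the prompt survives a Claude Code restart."""
    prompt = f"/{skill} {context}"
    if alternatives:
        # Ambiguous problems: pick the most likely skill, note the rest.
        prompt += "\n\nAlternatives considered: " + ", ".join(alternatives)
    # Save first -- a prompt that only exists in chat is lost on restart.
    Path("ai-do-prompt.md").write_text(prompt)
    return prompt  # only now is it safe to show the prompt in chat
```

The point of the sketch is the ordering: the `write_text` call happens unconditionally before the prompt is returned for display, which is the "#1 most common failure mode" guard described above.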
Step 1: Understand the problem
Your goal is to build a complete picture so you route to the right skill with the right prompt. Ask as many questions as needed — multiple rounds are fine. Users who invoke /ai-do want the correct answer, not a fast guess.
What to learn
More from lebsral/dspy-programming-not-prompting-lms-skills
ai-switching-models
Switch AI providers or models without breaking things. Use when you want to switch from OpenAI to Anthropic, try a cheaper model, stop depending on one vendor, compare models side-by-side, a model update broke your outputs, you need vendor diversification, or you want to migrate to a local model. Also use when your prompt broke after a model update, prompts that work for GPT-4 do not work for Claude or Llama, or you need to do a model migration. Covers DSPy model portability with provider config, re-optimization, model comparison, and multi-model pipelines. Also used for migrate from OpenAI to Anthropic, GPT to Claude migration, try Llama instead of GPT, model comparison framework, multi-provider AI setup, avoid vendor lock-in for AI, prompts break when switching models, model-agnostic AI code.
ai-stopping-hallucinations
Stop your AI from making things up. Use when your AI hallucinates, fabricates facts, is not grounded in real data, does not cite sources, makes unsupported claims, or you need to verify AI responses against source material. Also use when your LLM makes up facts, responses are disconnected from the input, or outputs are not grounded in source documents. Covers citation enforcement, faithfulness verification, grounding via retrieval, confidence thresholds, and evaluation of anti-hallucination quality. Also used for AI makes up citations, LLM fabricates data, ground AI in source documents, RAG but AI still hallucinates, force AI to cite sources, factual accuracy for AI, prevent AI from inventing facts, AI confident but wrong, LLM confabulation, hallucination detection, verify AI claims against documents.
ai-reasoning
Make AI solve hard problems that need planning and multi-step thinking. Use when your AI fails on complex questions, needs to break down problems, requires multi-step logic, needs to plan before acting, gives wrong answers on math or analysis tasks, or when a simple prompt is not enough for the reasoning required. Covers ChainOfThought, ProgramOfThought, MultiChainComparison, and Self-Discovery reasoning patterns in DSPy. Also used for AI gives shallow answers, LLM does not think before answering, chain of thought prompting, make AI show its work, AI fails at math, complex analysis with LLM, multi-step problem solving, AI reasoning errors, LLM logic mistakes, think step by step DSPy, AI cannot do basic arithmetic, deep reasoning with language models, self-consistency for better answers, tree of thought.
ai-building-chatbots
Build a conversational AI assistant with memory and state. Use when you need a customer support chatbot, helpdesk bot, onboarding assistant, sales qualification bot, FAQ assistant, or any multi-turn conversational AI. Also used for chatbot remember previous messages, conversational AI keeps forgetting context, build a helpdesk bot that actually works, chatbot drops context after a few turns, Intercom bot alternative, Zendesk AI alternative, build WhatsApp bot, Slack bot with AI, chatbot escalation to human agent, LangChain chatbot but simpler, chatbot for SaaS onboarding flow.
ai-improving-accuracy
Measure and improve how well your AI works. Use when AI gives wrong answers, accuracy is bad, responses are unreliable, you need to test AI quality, evaluate your AI, write metrics, benchmark performance, optimize prompts, improve results, or systematically make your AI better. Also used for spent hours tweaking prompts, trial and error prompt engineering is not working, quality plateaued early, stale prompts everywhere in your codebase, my AI is only 60% accurate, how to measure AI quality, AI evaluation framework, benchmark my LLM, prompt optimization not working, systematic way to improve AI, AI accuracy plateaued, DSPy optimizer tutorial, MIPROv2 optimization, how to go from 70% to 90% accuracy.
ai-parsing-data
Pull structured data from messy text using AI. Use when parsing invoices, extracting fields from emails, scraping entities from articles, converting unstructured text to JSON, extracting contact info, parsing resumes, reading forms, pulling data from transcripts (VTT, LiveKit, Recall), extracting fields from Langfuse traces, or any task where messy text goes in and clean structured data comes out. Also use when emails are messy and lack structure, or structured data extraction from unstructured content is unreliable. Also used for extract entities from text, parse PDF with AI, structured extraction from unstructured text, OCR plus AI extraction, convert email to structured data, pull fields from documents automatically, AI data entry automation, invoice parsing, resume parsing with AI, medical record extraction.