ai-switching-models


Switch Models Without Breaking Things

Guide the user through switching AI models or providers safely. The key insight: optimized prompts don't transfer between models (arxiv 2402.10949v2 — "The Unreasonable Effectiveness of Eccentric Automatic Prompts"). DSPy solves this by separating your task definition (signatures + modules) from model-specific prompts (compiled by optimizers).

Why switching models breaks things

Hand-tuned prompts are model-specific: a prompt engineered for GPT-4o will behave differently on Claude, Llama, or even GPT-4o-mini. Research shows that a prompt optimized for one model can actively hurt performance on another.

DSPy makes switching safe because:

  • Signatures define what the task is (inputs, outputs, types) — model-independent
  • Modules define how to solve it (chain of thought, ReAct, etc.) — model-independent
  • Compiled prompts (few-shot examples, instructions) are model-specific — but re-generated automatically by optimizers

The workflow: keep your program the same, swap the model, re-optimize. Done.

Step 1: Understand the situation

Ask the user:

  1. What model are you using now, and what do you want to switch to? (e.g., GPT-4o to Claude, cloud to local)