aiconfig-migrate
Migrate to AI Configs
You're using a skill that guides you through migrating an application from hardcoded LLM prompts to a full LaunchDarkly AI Configs implementation. Your job is to run the migration in five stages, stopping at each stage for the user to confirm:
- Audit the code — read-only scan that produces a structured list of everything hardcoded (prompt, model, parameters, tools, app-scoped knobs).
- Wrap the call — install the SDK, create the AI Config in LaunchDarkly with a fallback that mirrors the hardcoded values, and rewrite the call site to fetch the config fresh on every request.
- Move the tools — extract each tool's JSON schema, attach it to the AI Config, and swap every call site that references the old tool list.
- Add tracking — wire the per-request tracker (duration, tokens, success/error) around the provider call.
- Attach evaluators — either offline evals via the Playground + Datasets, or online judges that score sampled traffic automatically.
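The Stage 2 pattern — a fallback that mirrors the hardcoded values, fetched fresh on every request — can be sketched in plain Python. This is a minimal illustration, not the actual LaunchDarkly AI SDK API: the client class, config key, and dict shape below are hypothetical stand-ins.

```python
# Hypothetical sketch of Stage 2: the fallback mirrors what was hardcoded,
# and the config is fetched fresh per request, never cached at import time.

FALLBACK = {
    "model": "gpt-4o-mini",  # previously hardcoded at the call site
    "temperature": 0.2,      # previously hardcoded
    "messages": [{"role": "system", "content": "You are a support bot."}],
}

class FakeAIConfigClient:
    """Stand-in for an AI Config client: returns the remote variation when
    one exists, otherwise the fallback (e.g. SDK offline or not initialized)."""
    def __init__(self, remote=None):
        self.remote = remote

    def config(self, key, context, fallback):
        return self.remote.get(key, fallback) if self.remote else fallback

def handle_request(client, user_context):
    # Fetched inside the request handler so targeting and rollout changes
    # take effect immediately, without a redeploy.
    cfg = client.config("support-bot", user_context, FALLBACK)
    return cfg  # pass cfg["model"], cfg["temperature"], cfg["messages"] to the provider

cfg = handle_request(FakeAIConfigClient(), {"key": "user-123"})
assert cfg["model"] == "gpt-4o-mini"  # offline: fallback mirrors the old hardcoded values
```

Because the fallback is byte-for-byte what the code shipped with before the migration, a failed fetch degrades to the pre-migration behavior rather than to something new.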
⚠️ Three first-run failure modes to avoid.
- Tracker in the wrong scope. For an agent with a loop, mint `create_tracker()` once per user turn in a `setup_run` entry node — not inside `call_model`. Per-iteration factory calls produce N `runId`s and trip the at-most-once guards. See agent-mode-frameworks.md § Custom StateGraph.
- `load_chat_model` wrapper reuse. Templates like langchain-ai/react-agent ship a `load_chat_model(f"{provider}/{name}")` helper that wraps `init_chat_model(...)` and silently drops every variation parameter. Delete it (don't just avoid using it) and replace call sites with `create_langchain_model(ai_config)`.
- Fallthrough not flipped after /aiconfig-create. A freshly created AI Config's fallthrough points at an auto-generated disabled variation, so the SDK returns `enabled=False` until /aiconfig-targeting runs. Flip it before Stage 2 verification.
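The tracker-scope failure mode above can be illustrated with a minimal agent loop. The node names (`setup_run`, `call_model`) follow the text; the `Tracker` class and its at-most-once guard are hypothetical stand-ins for whatever the real SDK provides.

```python
import itertools

_run_ids = itertools.count(1)

class Tracker:
    """Hypothetical per-turn tracker with an at-most-once finalization guard."""
    def __init__(self):
        self.run_id = next(_run_ids)
        self._finished = False

    def track_success(self):
        if self._finished:
            raise RuntimeError("tracker already finalized for this runId")
        self._finished = True

def create_tracker():
    return Tracker()

def call_model(tracker):
    # The provider call would be wrapped with the tracker's timing here.
    pass

def run_turn(iterations=3):
    # Correct scope: mint the tracker ONCE per user turn, in the entry node...
    tracker = create_tracker()   # setup_run entry node
    for _ in range(iterations):
        call_model(tracker)      # ...and reuse it on every loop iteration
    tracker.track_success()
    return tracker.run_id

# One turn of N iterations consumes exactly one runId.
first, second = run_turn(), run_turn()
assert second == first + 1
```

Moving `create_tracker()` into `call_model` would instead mint one `runId` per iteration, and a second `track_success()` on any shared tracker trips the guard — which is exactly the failure mode described above.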
Coverage — which shapes are well-trodden vs. which require extrapolation
The skill is optimized for Python and Node.js / TypeScript; other languages are install-only. Within Python and Node the coverage tiers are: