aiconfig-online-evals

AI Config Online Evaluations

Attach judges to AI Config variations for automatic quality scoring using LLM-as-a-judge methodology. Judges evaluate responses and return scores between 0.0 and 1.0.
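The scoring contract above can be sketched as follows. This is a minimal illustration of the LLM-as-a-judge pattern, not the SDK's actual implementation: the prompt template, function names, and JSON reply shape are all hypothetical, and the judge model call is stubbed out as a plain callable.

```python
import json

# Hypothetical prompt template; the real judge prompt is configured in LaunchDarkly.
JUDGE_PROMPT = (
    "Rate the assistant response for helpfulness on a scale of 0.0 to 1.0.\n"
    'Reply with JSON: {{"score": <float>}}\n\n'
    "User prompt: {prompt}\nAssistant response: {response}"
)

def parse_judge_score(raw: str) -> float:
    """Parse the judge model's JSON reply, clamping to the 0.0-1.0 range."""
    try:
        score = float(json.loads(raw)["score"])
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        return 0.0  # treat unparseable replies as the lowest score
    return max(0.0, min(1.0, score))

def judge_response(prompt: str, response: str, call_judge_model) -> float:
    """Score one response with an LLM judge. `call_judge_model` stands in for
    whatever model client is used (stubbed here for illustration)."""
    raw = call_judge_model(JUDGE_PROMPT.format(prompt=prompt, response=response))
    return parse_judge_score(raw)
```

Clamping keeps the score inside the 0.0-1.0 range the judge contract promises, even when the judge model replies with an out-of-range or malformed value.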

Prerequisites

  • LaunchDarkly account with AI Configs enabled
  • API access token with write permissions
  • Existing AI Config with variations (use aiconfig-create skill)
  • For automatic metric recording and the consolidated judge-result API: Python AI SDK v0.18.0+ or Node.js AI SDK v0.17.0+

API Key Detection

  1. Check environment variables - LAUNCHDARKLY_API_KEY, LAUNCHDARKLY_API_TOKEN, LD_API_KEY
  2. Check MCP config - Claude: ~/.claude/config.json -> mcpServers.launchdarkly.env.LAUNCHDARKLY_API_KEY
  3. Prompt user - Only if detection fails

Core Concepts
