aiconfig-ai-metrics


AI Metrics Instrumentation

You're using a skill that wires LaunchDarkly AI metrics around an existing provider call. Your job is to audit what's already there, pick the right tier from the ladder below, and implement it with the least ceremony that still captures the metrics the Monitoring tab needs (duration, input/output tokens, success/error, plus TTFT when streaming).

The single most important thing to get right: default to the highest tier that fits the shape of the call. Going lower ("just write the manual tracker calls") looks flexible but costs you drift, missed metrics, and legacy patterns the SDKs have moved past.

The four-tier ladder

This is the order the official SDK READMEs (Python core, Node core, and every provider package) recommend. Walk from the top and stop at the first tier that fits:

Each tier has the same three facets: the pattern, when to use it, and what it tracks automatically.

Tier 1 — Managed runner
Pattern: Python: ai_client.create_model(...) returning a ManagedModel, then await model.invoke(...). Node: aiClient.initChat(...) / aiClient.createChat(...) returning a TrackedChat, then await chat.invoke(...).
Use when: the call is conversational (chat history, turn-based). This is what the provider READMEs lead with.
Tracks automatically: duration, tokens, success/error — all of it, zero tracker calls.

Tier 2 — Provider package + trackMetricsOf
Pattern: tracker.trackMetricsOf(Provider.getAIMetricsFromResponse, () => providerCall()). Provider packages today: @launchdarkly/server-sdk-ai-openai, -langchain, -vercel (Node) and launchdarkly-server-sdk-ai-openai, -langchain (Python).
Use when: the shape isn't a chat loop (one-shot completion, structured output, agent step) but the framework or provider has a package.
Tracks automatically: duration + success/error from the wrapper; tokens from the package's built-in getAIMetricsFromResponse extractor.

Tier 3 — Custom extractor + trackMetricsOf
Pattern: same trackMetricsOf wrapper, but you write a small function that maps the provider response to LDAIMetrics (tokens + success).
Use when: no provider package exists (Anthropic direct, Gemini, Cohere, custom HTTP).
Tracks automatically: duration + success/error from the wrapper; tokens from your extractor.

Tier 4 — Raw manual
Pattern: separate calls to trackDuration, trackTokens, trackSuccess / trackError, plus trackTimeToFirstToken for streams.
Use when: streaming with TTFT, unusual response shapes, partial tracking, anything Tier 2–3 can't cleanly wrap.
Tracks automatically: only what you explicitly call — it's on you to not miss one.
A call to track_openai_metrics / trackOpenAIMetrics / track_bedrock_converse_metrics / trackBedrockConverseMetrics / trackVercelAISDKGenerateTextMetrics is Tier-2 legacy shorthand. These helpers still exist in the SDK source but none of the current provider READMEs use them — they've been superseded by trackMetricsOf + Provider.getAIMetricsFromResponse. Do not recommend them for new code; if you see them in an existing codebase, leave them alone unless the user is already on a cleanup pass.
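For the Tier 2–3 shape those helpers were superseded by, this sketch shows the extractor contract. The LDAIMetrics shape, the trackMetricsOf body, and the Anthropic-style response are all simplified stand-ins, not the SDK's real types; only the overall shape (time the call, run an extractor over the response, record the metrics) comes from the pattern described above.

```typescript
// Simplified stand-in for the SDK's metrics shape.
interface LDAIMetrics {
  success: boolean;
  usage?: { total: number; input: number; output: number };
}

// Stand-in wrapper: time the call, run the extractor, record the result.
// A real tracker would also catch the error path and report it.
async function trackMetricsOf<T>(
  extractor: (response: T) => LDAIMetrics,
  call: () => Promise<T>,
): Promise<T> {
  const start = Date.now();
  const response = await call();
  const metrics = extractor(response);
  console.log(`duration=${Date.now() - start}ms tokens=${metrics.usage?.total ?? 0}`);
  return response;
}

// Hypothetical Anthropic-style response: no provider package exists,
// so Tier 3 applies and you write the mapping yourself.
interface AnthropicResponse {
  content: { text: string }[];
  usage: { input_tokens: number; output_tokens: number };
}

function anthropicExtractor(res: AnthropicResponse): LDAIMetrics {
  return {
    success: true,
    usage: {
      input: res.usage.input_tokens,
      output: res.usage.output_tokens,
      total: res.usage.input_tokens + res.usage.output_tokens,
    },
  };
}

// Fake one-shot call standing in for a real provider request.
const fakeCall = async (): Promise<AnthropicResponse> => ({
  content: [{ text: 'hi' }],
  usage: { input_tokens: 12, output_tokens: 5 },
});

trackMetricsOf(anthropicExtractor, fakeCall).then((res) =>
  console.log(res.content[0].text),
);
```

At Tier 2 the only difference is that the extractor slot is filled by the provider package's Provider.getAIMetricsFromResponse instead of a function you wrote.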
