sentry-setup-ai-monitoring

Summary

Automatically detect and configure Sentry monitoring for LLM calls, agents, and AI SDKs.

  • Auto-detects installed AI packages (OpenAI, Anthropic, LangChain, Google GenAI, Vercel AI, Pydantic AI, and others) and enables appropriate integrations with zero manual registration in Python
  • Requires tracing enabled (tracesSampleRate > 0) and supports manual span instrumentation via gen_ai.* operation types for unsupported SDKs
  • Captures model, token counts, and latency by default; prompt and output recording is opt-in only and requires explicit user confirmation due to PII sensitivity
  • Provides JavaScript and Python configuration examples, including browser-side manual wrapping for Next.js and per-call telemetry setup for Vercel AI Edge runtime
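For SDKs that Sentry does not auto-detect, the summary above mentions manual span instrumentation via gen_ai.* operation types. A hedged Python sketch of what that can look like follows; the op name "gen_ai.chat", the gen_ai.* data keys, the model name, and the fake response are assumptions for illustration, so verify the exact conventions against docs.sentry.io before relying on them.

```python
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    traces_sample_rate=1.0,  # tracing must be enabled for AI monitoring
)

def chat(prompt: str) -> str:
    # Wrap the call to an unsupported LLM SDK in a gen_ai.* span so it
    # appears in Sentry's AI monitoring views.
    with sentry_sdk.start_span(op="gen_ai.chat", name="chat my-model") as span:
        # Call your LLM SDK here; the response and usage below are illustrative.
        response_text = "..."
        usage = {"input_tokens": 12, "output_tokens": 48}
        span.set_data("gen_ai.request.model", "my-model")  # assumed key
        span.set_data("gen_ai.usage.input_tokens", usage["input_tokens"])
        span.set_data("gen_ai.usage.output_tokens", usage["output_tokens"])
        return response_text
```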
SKILL.md

Set Up Sentry AI Agent Monitoring

Configure Sentry to track LLM calls, agent executions, tool usage, and token consumption.

Invoke This Skill When

  • User asks to "monitor AI/LLM calls" or "track OpenAI/Anthropic usage"
  • User wants "AI observability" or "agent monitoring"
  • User asks about token usage, model latency, or AI costs

Important: The SDK versions, API names, and code samples below are examples. Always verify against docs.sentry.io before implementing, as APIs and minimum versions may have changed.

Prerequisites

AI monitoring requires tracing enabled (tracesSampleRate > 0).
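As a minimal sketch of this prerequisite in the Python SDK (the DSN is a placeholder; verify parameter names against docs.sentry.io):

```python
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    # AI monitoring requires tracing: a sample rate of 0 disables it.
    # 1.0 traces every transaction; lower this in production to control volume.
    traces_sample_rate=1.0,
)
```

In JavaScript the equivalent option is `tracesSampleRate` passed to `Sentry.init`.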

Data Capture Warning

Prompt and output recording captures user content that is likely PII. Before enabling recordInputs/recordOutputs (JS) or include_prompts/send_default_pii (Python), confirm with the user that capturing prompts and model outputs is acceptable under their privacy and compliance requirements.
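A hedged Python sketch of the opt-in configuration, shown here with the OpenAI integration (the DSN is a placeholder; confirm the integration and option names against docs.sentry.io for your SDK version):

```python
import sentry_sdk
from sentry_sdk.integrations.openai import OpenAIIntegration

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    traces_sample_rate=1.0,
    # Both settings below are opt-in and expose user content that is
    # likely PII; enable them only after explicit user confirmation.
    send_default_pii=True,
    integrations=[OpenAIIntegration(include_prompts=True)],
)
```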

Source: getsentry/sentry-agent-skills · Installs: 519 · GitHub Stars: 19 · First Seen: Jan 20, 2026