phoenix-observability


Phoenix - AI Observability Platform

An open-source AI observability and evaluation platform for LLM applications, providing tracing, evaluation, datasets, experiments, and real-time monitoring.

When to use Phoenix

Use Phoenix when:

  • Debugging LLM application issues with detailed traces
  • Running systematic evaluations on datasets
  • Monitoring production LLM systems in real-time
  • Building experiment pipelines for prompt/model comparison
  • Self-hosting observability to avoid vendor lock-in
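For the debugging and self-hosting cases above, a typical first step is to run Phoenix locally and point OpenTelemetry exports at it. A minimal sketch, assuming the `arize-phoenix` package is installed and using a hypothetical project name:

```python
# Minimal sketch: start a local Phoenix instance and register it as the
# OpenTelemetry trace destination. Assumes `arize-phoenix` (and its OTel
# extras) are installed; "my-llm-app" is a placeholder project name.
import phoenix as px
from phoenix.otel import register

session = px.launch_app()       # serves the Phoenix UI locally
tracer_provider = register(     # routes OTel spans to the local instance
    project_name="my-llm-app",
)
```

From here, framework-specific instrumentors (e.g. for OpenAI or LangChain clients) can attach to `tracer_provider` so application calls appear as traces in the UI.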

Key features:

  • Tracing: OpenTelemetry-based trace collection for any LLM framework
  • Evaluation: LLM-as-judge evaluators for quality assessment
  • Datasets: Versioned test sets for regression testing
  • Experiments: Compare prompts, models, and configurations
  • Playground: Interactive prompt testing with multiple models
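The evaluation feature listed above follows the LLM-as-judge pattern: a judge model labels each (input, output) pair against a rubric, and pass rates are aggregated over a dataset. This is a self-contained sketch of that pattern with a stubbed judge, not Phoenix's own evaluator API:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Example:
    input: str
    output: str

def evaluate(
    examples: List[Example],
    judge: Callable[[str, str], str],
) -> Tuple[float, List[str]]:
    """Run an LLM-as-judge pass; return the 'correct' fraction and per-row labels."""
    labels = [judge(ex.input, ex.output) for ex in examples]
    passed = sum(1 for label in labels if label == "correct")
    return passed / len(labels), labels

# Stub judge for illustration only: a real judge would prompt an LLM
# with a grading rubric and parse its verdict into a label.
def stub_judge(question: str, answer: str) -> str:
    return "correct" if answer else "incorrect"

rate, labels = evaluate(
    [Example("2+2?", "4"), Example("capital of France?", "")],
    stub_judge,
)
# rate == 0.5, labels == ["correct", "incorrect"]
```

Swapping `stub_judge` for a model-backed grader keeps the same harness, which is what makes judge-based evaluation repeatable across dataset versions.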
Installs: 5
GitHub Stars: 5
First Seen: Mar 28, 2026