ai-tracing-requests

See What Your AI Did on a Specific Request

Guide the user through tracing and debugging individual AI requests. The goal: for any request, see every LM call, retrieval step, intermediate result, token count, and latency.
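The end state can be sketched with a tiny, framework-agnostic trace recorder. This is a minimal illustration, not a DSPy API: the names `RequestTrace`, `Span`, and `step` are invented here, and the two lambda "steps" stand in for real retrieval and generation calls. Each step is wrapped so its inputs, output, and latency are captured for one request.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Span:
    """One recorded pipeline step: what went in, what came out, how long it took."""
    name: str
    inputs: dict
    output: object = None
    latency_ms: float = 0.0

@dataclass
class RequestTrace:
    """All spans recorded for a single request."""
    request_id: str
    spans: list = field(default_factory=list)

    def step(self, name, fn, **inputs):
        # Run one pipeline step and record its inputs, output, and latency.
        start = time.perf_counter()
        output = fn(**inputs)
        self.spans.append(Span(name=name, inputs=inputs, output=output,
                               latency_ms=(time.perf_counter() - start) * 1000))
        return output

# Trace a toy two-step pipeline for one request.
trace = RequestTrace(request_id="req-12345")
docs = trace.step("retrieve", lambda query: ["doc A", "doc B"], query="refund policy")
answer = trace.step("generate",
                    lambda context: f"Based on {len(context)} docs: ...",
                    context=docs)

for s in trace.spans:
    print(f"{s.name}: {s.latency_ms:.2f} ms -> {s.output!r}")
```

In a real pipeline the same idea applies per module call; tracing libraries (or DSPy's own call history) record these spans for you rather than requiring a hand-rolled recorder.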

How tracing differs from monitoring

| | Monitoring (/ai-monitoring) | Tracing (this skill) |
|---|---|---|
| Scope | Aggregate health across all requests | Single request, full detail |
| Question answered | "Is accuracy dropping this week?" | "Why did customer #12345 get a wrong answer at 2:14pm?" |
| Output | Scores, trends, alerts | Call traces, intermediate results, latencies |
| Timing | Periodic batch evaluation | Per-request, real-time |

Step 1: Understand the situation

Ask the user:

  1. What happened? A specific wrong answer, slow response, or unexpected behavior?
  2. What does your pipeline look like? Single module or multi-step pipeline? Which DSPy modules?
  3. Where is this running? Local development, staging, or production?