instrument-llm-analytics
Add PostHog LLM analytics
Use this skill to add PostHog LLM analytics that trace AI model usage in new or changed code. Use it after implementing LLM features or reviewing PRs to ensure all generations are captured with token counts, latency, and costs. If PostHog is not yet installed, this skill also covers initial SDK setup. Supports any provider or framework.
Supported providers: OpenAI, Azure OpenAI, Anthropic, Google, Cohere, Mistral, Perplexity, DeepSeek, Groq, Together AI, Fireworks AI, xAI, Cerebras, Hugging Face, Ollama, OpenRouter.
Supported frameworks: LangChain, LlamaIndex, CrewAI, AutoGen, DSPy, LangGraph, Pydantic AI, Vercel AI, LiteLLM, Instructor, Semantic Kernel, Mirascope, Mastra, SmolAgents, OpenAI Agents.
Proxy/gateway: Portkey, Helicone.
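For orientation, here is a minimal sketch of what an instrumented call can look like once the skill has been applied, assuming the posthog Python SDK with its OpenAI wrapper and the openai package; the keys, host, model, distinct ID, and property values are placeholders, and the PostHog-specific parameter names are taken from the PostHog AI integration rather than the plain OpenAI client.

```python
# Minimal sketch: a PostHog-instrumented OpenAI call (Python).
# Assumes the `posthog` package (with its AI integrations) and the `openai` SDK
# are installed; keys, host, and IDs below are placeholders.
from posthog import Posthog
from posthog.ai.openai import OpenAI  # PostHog's drop-in wrapper around the OpenAI client

posthog = Posthog("<posthog_project_api_key>", host="https://us.i.posthog.com")

client = OpenAI(
    api_key="<openai_api_key>",
    posthog_client=posthog,  # generations are captured as $ai_generation events
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize this ticket."}],
    posthog_distinct_id="user_123",                 # optional: tie the trace to a user
    posthog_properties={"feature": "summarizer"},   # optional: extra event properties
)
print(response.choices[0].message.content)

posthog.shutdown()  # flush queued events before the process exits
```

The wrapper records token counts, latency, and cost on each captured generation, which is the end state the steps below work toward for every provider and framework in the codebase.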
Instructions
Follow these steps IN ORDER:
STEP 1: Analyze the codebase and detect the LLM stack.
- Look for LLM provider SDKs (openai, anthropic, google-generativeai, etc.) and AI frameworks (langchain, llamaindex, crewai, etc.) in dependency files and imports; a rough detection sketch follows this list.
- Look for lockfiles to determine the package manager.
- Check for existing PostHog or observability setup. If PostHog is already installed and LLM tracing is configured, skip to STEP 4 to add tracing for any new LLM calls.
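To make the detection pass concrete, here is a purely illustrative sketch; the package names, manifest files, and lockfile-to-package-manager mapping are example assumptions, not an exhaustive inventory of the stacks listed above.

```python
# Illustrative sketch of STEP 1: scan dependency manifests for LLM SDKs,
# infer the package manager from lockfiles, and check for an existing PostHog install.
# Package and file names below are examples, not an exhaustive mapping.
from pathlib import Path

LLM_PACKAGES = {"openai", "anthropic", "google-generativeai", "cohere", "mistralai",
                "langchain", "llama-index", "crewai", "litellm"}
LOCKFILES = {"package-lock.json": "npm", "pnpm-lock.yaml": "pnpm", "yarn.lock": "yarn",
             "poetry.lock": "poetry", "uv.lock": "uv"}
MANIFESTS = ["package.json", "pyproject.toml", "requirements.txt"]

def detect_stack(root: str = ".") -> dict:
    root_path = Path(root)
    # Concatenate whichever manifests exist and do a simple substring scan.
    manifest_text = " ".join(
        (root_path / name).read_text() for name in MANIFESTS if (root_path / name).exists()
    )
    return {
        "llm_sdks": sorted(p for p in LLM_PACKAGES if p in manifest_text),
        "package_managers": sorted(pm for lf, pm in LOCKFILES.items() if (root_path / lf).exists()),
        "posthog_installed": "posthog" in manifest_text,
    }

if __name__ == "__main__":
    print(detect_stack())
```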