feature-usage-feed

Building a feature usage feed via LLM evals

Some PostHog features (group session summaries, single session summaries, replay AI search, error tracking AI debug, etc.) generate hundreds or thousands of LLM traces per week. Reading them by hand is not feasible. This skill covers the end-to-end pattern for turning that trace volume into a live Slack feed of canonical use cases — what users are actually doing with the feature.

The workflow is mixed but leans on the UI. Trace inspection and filter discovery (steps 1-2) are MCP-driven. Eval creation, dry-running, and enabling (steps 4-5) are MCP-driven when the posthog:llma-evaluation-* tools are exposed to your agent, but they often aren't; in that case, fall back to the UI (creating the Slack alert under Data pipeline → destinations is always a UI step). Each step below flags its UI fallback. Expect to finish in the UI even when you start from chat.
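To make the trace-inspection step concrete, here is a minimal sketch of building a HogQL query that scopes LLM traces to one feature. The event and property names (`$ai_generation`, `$ai_trace_id`, `$ai_input`) follow PostHog's LLM analytics schema, but the scoping property (`feature`) and its value are hypothetical; verify both against your own project's events before relying on this.

```python
def trace_query(feature_property: str, feature_value: str, limit: int = 50) -> str:
    """Build a HogQL statement selecting recent LLM traces for one feature.

    Assumes traces are recorded as $ai_generation events and that a custom
    property (feature_property) distinguishes the feature of interest.
    """
    return (
        "SELECT properties.$ai_trace_id AS trace_id, "
        "properties.$ai_input AS input, timestamp "
        "FROM events "
        "WHERE event = '$ai_generation' "
        f"AND properties.{feature_property} = '{feature_value}' "
        "ORDER BY timestamp DESC "
        f"LIMIT {limit}"
    )

# Example: scope to a hypothetical "replay-ai-search" feature tag.
query = trace_query("feature", "replay-ai-search")
```

You can run a query like this from PostHog's SQL editor (or via the query API) to sample traces before committing to a filter for the eval.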

When to use

  • "How are people actually using [feature X] in production?"
  • "Can we identify the canonical use cases for [feature X] so we can write better docs / prioritize improvements?"
  • "I want a Slack feed of representative usage examples without manually skimming traces."
  • "Set up a feed of use cases for [feature X] in #team-[area]-usage."

If the user just wants to debug a single trace or tune an existing eval, redirect to exploring-llm-traces or exploring-llm-evaluations instead.

Two filter patterns

This skill supports two ways to scope an eval to "the feature you care about":

Repository: posthog/skills
First seen: Apr 24, 2026