# Phoenix TypeScript Conventions
These conventions apply to all TypeScript in the Phoenix monorepo — the app/ frontend, the js/packages/ libraries (phoenix-client, phoenix-cli, phoenix-evals, phoenix-mcp, phoenix-otel, phoenix-config), examples, and benchmarks.
Before writing new code, explore the directory you're working in to understand existing patterns — then follow these rules.
## Naming
Self-documenting names eliminate mental parsing for the next reader.
- Variables must not use single letters — even loop counters benefit from `index`, `row`, `char`.
- Complex conditions should be extracted into named booleans so code reads as prose.
- Booleans must use verb prefixes: `isAllowed`, `hasError`, `canSubmit` — not `allowed`, `error`.
- Function names must start with an action verb that describes what the function does: `getUser`, `normalizeTimestamp`, `logEvent`, `parseResponse`, `buildQuery` — not `user()`, `timestamp()`, `event()`.
```typescript
// Bad — single letters and ambiguous names
for (let i = 1; i < s.length; i++) {
  const d = s[i].ts - s[i - 1].ts;
}
```
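The same loop with self-documenting names, following the rules above. The `Sample` shape and the helper name `gapsBetweenSamples` are illustrative, not taken from the Phoenix codebase:

```typescript
// Good — descriptive names make the intent obvious.
// `Sample` and `gapsBetweenSamples` are hypothetical names for illustration.
interface Sample {
  ts: number; // timestamp in milliseconds
}

function gapsBetweenSamples(samples: Sample[]): number[] {
  const gaps: number[] = [];
  // Start at 1 so every iteration has a previous sample to diff against
  for (let index = 1; index < samples.length; index++) {
    const gapMs = samples[index].ts - samples[index - 1].ts;
    gaps.push(gapMs);
  }
  return gaps;
}
```

Note how `index` and `gapMs` remove the need to decode `i` and `d`, and the function name states the action it performs.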