# Test Coverage Improver
## Overview
Use this skill whenever coverage needs assessment or improvement (coverage regressions, failing thresholds, or user requests for stronger tests). It runs the coverage suite, analyzes results, highlights the biggest gaps, and prepares test additions while confirming with the user before changing code.
## Quick Start
- From the repo root, run `pnpm test:coverage` (set `CI=1` if needed) to regenerate `coverage/`.
- Collect artifacts: `coverage/coverage-summary.json` (preferred) or `coverage/coverage-final.json`, plus `coverage/lcov.info` and `coverage/lcov-report/index.html` for drill-downs.
- Summarize coverage: total percentages, lowest files, branches under 80%, and uncovered lines/paths (see the sketch after this list).
- Draft test ideas per file: scenario, behavior under test, expected outcome, and likely coverage gain.
- Ask the user for approval to implement the proposed tests; pause until they agree.
- After approval, write the tests in the relevant package, rerun `pnpm test:coverage`, and then run `$code-change-verification` before marking work complete.
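The summarize step lends itself to a short script. Below is a minimal sketch, assuming the Istanbul `json-summary` shape (`lines`/`statements`/`functions`/`branches` metrics per file, plus a `total` entry) that coverage runners typically write to `coverage/coverage-summary.json`; the file name `summarize-coverage.ts` and the top-10 cutoff are illustrative choices, not part of this skill:

```ts
// summarize-coverage.ts: minimal sketch; assumes Istanbul's json-summary shape.
import { readFileSync } from "node:fs";

interface Metric {
  total: number;
  covered: number;
  pct: number;
}

interface FileSummary {
  lines: Metric;
  statements: Metric;
  functions: Metric;
  branches: Metric;
}

const summary: Record<string, FileSummary> = JSON.parse(
  readFileSync("coverage/coverage-summary.json", "utf8"),
);

// "total" aggregates the whole run; every other key is a file path.
const { total, ...files } = summary;
console.log(`total: ${total.lines.pct}% lines, ${total.branches.pct}% branches`);

// Rank lowest-covered files first so the biggest gaps surface at the top.
const ranked = Object.entries(files).sort(
  ([, a], [, b]) => a.lines.pct - b.lines.pct,
);
for (const [file, m] of ranked.slice(0, 10)) {
  console.log(
    `${m.lines.pct.toFixed(1)}% lines  ${m.branches.pct.toFixed(1)}% branches  ${file}`,
  );
}

// Flag files under the 80% branch bar mentioned above.
const weak = ranked.filter(([, m]) => m.branches.pct < 80).length;
console.log(`${weak} file(s) below 80% branch coverage`);
```

The script only reads the artifact (run it, for example, with `npx tsx summarize-coverage.ts`), so it is safe to use while drafting the proposal, before asking the user for approval.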
## Workflow Details
- Run coverage: execute `CI=1 pnpm test:coverage` at the repo root. Avoid watch flags, and keep prior coverage artifacts only if comparing trends.
- Parse summaries efficiently: prefer `coverage/coverage-summary.json` for per-file percentages, and fall back to `coverage/coverage-final.json` when you need uncovered statement detail, as sketched below.
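For the uncovered-line drill-down, `coverage/coverage-final.json` holds raw Istanbul data. Here is a sketch under the assumption that each entry carries a `statementMap` (statement id to source range) and an `s` map (statement id to hit count); verify the shape against your generated file before relying on it:

```ts
// uncovered-lines.ts: minimal sketch; assumes Istanbul's raw coverage-final.json shape.
import { readFileSync } from "node:fs";

interface Loc {
  start: { line: number };
  end: { line: number };
}

interface FileCoverage {
  path: string;
  statementMap: Record<string, Loc>; // statement id -> source range
  s: Record<string, number>; // statement id -> hit count
}

const data: Record<string, FileCoverage> = JSON.parse(
  readFileSync("coverage/coverage-final.json", "utf8"),
);

for (const file of Object.values(data)) {
  // Statements that were never hit mark the uncovered lines/paths to target.
  const missedLines = Object.entries(file.s)
    .filter(([, hits]) => hits === 0)
    .map(([id]) => file.statementMap[id].start.line);
  if (missedLines.length > 0) {
    const unique = [...new Set(missedLines)].sort((a, b) => a - b);
    console.log(`${file.path}: uncovered lines ${unique.join(", ")}`);
  }
}
```

Grouping misses by file keeps the output aligned with the per-file test ideas the skill drafts next.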
## More from openai/openai-agents-js

- `pnpm-upgrade`: Keep pnpm current: run `pnpm self-update`/`corepack prepare`, align `packageManager` in `package.json`, and bump `pnpm/action-setup` and pinned pnpm versions in `.github/workflows` to the latest release. Use this when refreshing the pnpm toolchain manually or in automation.
- `docs-sync`: Analyze main branch implementation and configuration to find missing, incorrect, or outdated documentation in `docs/`. Use when asked to audit doc coverage, sync docs with code, or propose doc updates/structure changes. Only update English docs (`docs/src/content/docs/**`) and never touch translated docs under `docs/src/content/docs/ja`, `ko`, or `zh`. Provide a report and ask for approval before editing docs.
- `openai-knowledge`: Use when working with the OpenAI API (Responses API) or OpenAI platform features (tools, streaming, Realtime API, auth, models, rate limits, MCP) and you need authoritative, up-to-date documentation (schemas, examples, limits, edge cases). Prefer the OpenAI Developer Documentation MCP server tools when available; otherwise guide the user to enable `openaiDeveloperDocs`.
- `changeset-validation`: Validate changesets in openai-agents-js using LLM judgment against git diffs (including uncommitted local changes). Use when `packages/` or `.changeset/` are modified, or when verifying PR changeset compliance and bump level.
- `code-change-verification`: Run the mandatory verification stack when changes affect runtime code, tests, or build/test behavior in the OpenAI Agents JS monorepo.
- `integration-tests`: Run the integration-tests pipeline that depends on a local npm registry (Verdaccio). Use when asked to execute integration tests or local publish workflows in this repo.