# Test Coverage Improver (`test-coverage-improver`)

## Overview
Use this skill whenever coverage needs assessment or improvement (coverage regressions, failing thresholds, or user requests for stronger tests). It runs the coverage suite, analyzes results, highlights the biggest gaps, and prepares test additions while confirming with the user before changing code.
## Quick Start
- From the repo root, run `make coverage` to regenerate `.coverage` data and `coverage.xml`.
- Collect artifacts: `.coverage` and `coverage.xml`, plus the console output from `coverage report -m` for drill-downs.
- Summarize coverage: total percentages, lowest-covered files, and uncovered lines/paths.
- Draft test ideas per file: scenario, behavior under test, expected outcome, and likely coverage gain.
- Ask the user for approval to implement the proposed tests; pause until they agree.
- After approval, write the tests in `tests/`, rerun `make coverage`, and then run `$code-change-verification` before marking work complete.
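The "summarize coverage" step above can be sketched in a few lines of Python. This is a minimal illustration, not part of the skill itself: it parses the Cobertura-style `coverage.xml` that `coverage xml` emits and ranks files by their `line-rate` attribute. The sample XML string and file names are made up for demonstration; in practice you would read the real `coverage.xml` from disk.

```python
# Sketch: rank files by coverage from a Cobertura-style coverage.xml,
# as produced by `coverage xml`. The SAMPLE document below is illustrative.
import xml.etree.ElementTree as ET

SAMPLE = """<?xml version="1.0"?>
<coverage line-rate="0.72">
  <packages>
    <package name="agents">
      <classes>
        <class filename="src/agents/runner.py" line-rate="0.55"/>
        <class filename="src/agents/tools.py" line-rate="0.91"/>
        <class filename="src/agents/tracing.py" line-rate="0.40"/>
      </classes>
    </package>
  </packages>
</coverage>"""

def lowest_covered(xml_text: str, limit: int = 5) -> list[tuple[str, float]]:
    """Return (filename, line_rate) pairs sorted from lowest coverage up."""
    root = ET.fromstring(xml_text)
    files = [
        (cls.get("filename"), float(cls.get("line-rate", "0")))
        for cls in root.iter("class")
    ]
    return sorted(files, key=lambda pair: pair[1])[:limit]

for name, rate in lowest_covered(SAMPLE):
    print(f"{rate:6.1%}  {name}")
```

Swapping `SAMPLE` for `open("coverage.xml").read()` gives a quick "lowest files first" summary to anchor the per-file test proposals.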
## Workflow Details
- Run coverage: Execute `make coverage` at the repo root. Avoid watch flags, and keep prior coverage artifacts only if comparing trends.
- Parse summaries efficiently:
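One way to parse the summary data efficiently, sketched under the assumption that `coverage.xml` is Cobertura-formatted (coverage.py's default for `coverage xml`): each `<class>` element lists `<line number="…" hits="…">` entries, so uncovered lines are simply those with `hits="0"`. The sample document and file name here are invented for illustration.

```python
# Sketch: map each file to its never-executed line numbers by reading
# the <line ... hits="0"> entries in a Cobertura-style coverage.xml.
import xml.etree.ElementTree as ET

SAMPLE = """<?xml version="1.0"?>
<coverage>
  <packages><package name="agents"><classes>
    <class filename="src/agents/runner.py" line-rate="0.5">
      <lines>
        <line number="1" hits="3"/>
        <line number="2" hits="0"/>
        <line number="5" hits="0"/>
      </lines>
    </class>
  </classes></package></packages>
</coverage>"""

def uncovered_lines(xml_text: str) -> dict[str, list[int]]:
    """Map each filename to the line numbers that were never executed."""
    root = ET.fromstring(xml_text)
    gaps: dict[str, list[int]] = {}
    for cls in root.iter("class"):
        missing = [
            int(line.get("number"))
            for line in cls.iter("line")
            if line.get("hits") == "0"
        ]
        if missing:
            gaps[cls.get("filename")] = missing
    return gaps

print(uncovered_lines(SAMPLE))
```

These per-file gap lists pair naturally with the `coverage report -m` console output when drafting test ideas for the worst-covered paths.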