Testing

This skill enables an AI agent to systematically generate, run, and evaluate tests for a given codebase. It covers the full testing lifecycle — from analyzing source code and identifying meaningful test cases, through writing and executing tests, to measuring coverage and recommending improvements. The agent supports unit tests, integration tests, and end-to-end tests across multiple languages and frameworks.

Workflow

  1. Analyze the source code. Read the target file or module and build a dependency graph of its functions, classes, and external interactions. Identify public interfaces, internal helpers, input parameters, return types, and side effects. This step determines what is testable and what kinds of tests are appropriate (see the analysis sketch after this list).

  2. Identify test cases. For each function or method, enumerate the scenarios that need coverage: happy-path inputs, boundary values, invalid or null inputs, exception paths, and state transitions. For integration points, identify the collaborators that need to be mocked or stubbed versus tested live. Prioritize cases by risk: complex branching logic and public API surfaces come first (see the scenario matrix after this list).

  3. Write the tests. Generate well-structured test code using the project's existing test framework (e.g., pytest, Jest, JUnit). Each test should have a descriptive name that states the scenario and expected outcome. Use the Arrange-Act-Assert pattern: set up preconditions, invoke the code under test, and assert the expected result. Add parameterized tests where a single logical case applies to multiple input sets (see the pytest sketch after this list).

  4. Run the tests. Execute the test suite using the appropriate runner command. Capture the full output, including pass/fail status, assertion messages, and timing information. If any tests fail, parse the failure output to determine whether the failure indicates a bug in the source code or an error in the test itself (see the runner sketch after this list).

  5. Analyze coverage. Run the test suite with coverage instrumentation enabled (e.g., pytest --cov, jest --coverage). Parse the coverage report to identify uncovered lines, branches, and functions. Flag any critical code paths that lack coverage, such as error handlers, security checks, and data validation (see the coverage sketch after this list).

  6. Suggest improvements. Based on coverage gaps and code complexity, recommend additional test cases. Suggest refactoring opportunities that would make the code more testable, such as extracting pure functions or introducing dependency injection (see the refactor sketch after this list). Provide a summary report with coverage percentages and a prioritized list of next actions.
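
The following is a minimal sketch of the step-1 analysis for Python source, using only the standard library's ast module. The file path and the exact fields collected are illustrative assumptions, not part of the skill's specification.

```python
import ast

def list_testable_items(path: str) -> list[dict]:
    """Enumerate functions (and methods) in a module with their parameters."""
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)

    items = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            items.append({
                "name": node.name,
                "params": [a.arg for a in node.args.args],
                "is_public": not node.name.startswith("_"),
                "raises": any(isinstance(n, ast.Raise) for n in ast.walk(node)),
            })
    return items

# Hypothetical usage; "src/mymodule.py" is a placeholder path.
# for item in list_testable_items("src/mymodule.py"):
#     print(item)
```

A fuller implementation would also record class hierarchies and calls to external collaborators, per the dependency-graph goal in step 1.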
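
For step 2, the enumerated scenarios can be captured as a plain data table before any test code is written. The parse_age helper and every case below are hypothetical; the point is the shape: one row per scenario, grouped by category and ordered by risk.

```python
# Scenario matrix for a hypothetical parse_age(text) -> int helper.
# Expected outcomes are either a return value or an exception type.
TEST_SCENARIOS = [
    # (case id,              input,  expected)
    ("happy_path",           "42",   42),
    ("boundary_zero",        "0",    0),
    ("boundary_upper",       "130",  130),
    ("invalid_non_numeric",  "abc",  ValueError),
    ("invalid_negative",     "-1",   ValueError),
    ("invalid_none",         None,   TypeError),
]
```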
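
A step-3 sketch in pytest, showing descriptive names, the Arrange-Act-Assert pattern, and parameterized tests. The mymodule module and the parse_age function carried over from the matrix above are assumptions for illustration.

```python
import pytest

from mymodule import parse_age  # hypothetical module under test

def test_parse_age_returns_int_for_valid_input():
    # Arrange
    raw = "42"
    # Act
    result = parse_age(raw)
    # Assert
    assert result == 42

@pytest.mark.parametrize("raw, expected", [("0", 0), ("42", 42), ("130", 130)])
def test_parse_age_accepts_boundary_and_typical_values(raw, expected):
    assert parse_age(raw) == expected

@pytest.mark.parametrize("raw", ["abc", "-1", ""])
def test_parse_age_raises_value_error_on_invalid_input(raw):
    with pytest.raises(ValueError):
        parse_age(raw)
```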
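
Step 4 can be sketched as a subprocess call that captures the runner's output and exit code. pytest's documented convention is exit code 0 when all tests pass and 1 when some fail; the tests/ directory is a placeholder.

```python
import subprocess

proc = subprocess.run(
    ["pytest", "tests/", "-v", "--tb=short"],
    capture_output=True,
    text=True,
)
print(proc.stdout)

if proc.returncode == 1:
    # Failures occurred: list them, then decide for each whether the source
    # code or the test itself is at fault before changing anything.
    failed = [line for line in proc.stdout.splitlines() if "FAILED" in line]
    print("\n".join(failed))
```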
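
For step 5, one possible approach is to ask pytest-cov for a machine-readable report and scan it for gaps. This sketch assumes the pytest-cov plugin is installed; coverage.json is coverage.py's default JSON report path, and mymodule is a placeholder.

```python
import json
import subprocess

subprocess.run(
    ["pytest", "--cov=mymodule", "--cov-report=json", "--cov-report=term-missing"],
    check=False,  # a non-zero exit here just means some tests failed
)

with open("coverage.json", encoding="utf-8") as f:
    report = json.load(f)

# Flag files with uncovered lines so critical paths can be reviewed first.
for filename, data in report["files"].items():
    if data["missing_lines"]:
        print(f"{filename}: uncovered lines {data['missing_lines']}")
```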
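
Finally, a step-6 sketch of the dependency-injection refactor the skill recommends for testability. All class and method names here are invented for illustration.

```python
# Before: the collaborator is constructed inside the method, so every test
# would hit the real HttpClient (not defined here; illustrative only).
class ReportServiceBefore:
    def fetch_report(self, report_id):
        client = HttpClient("https://api.example.com")  # hard-wired dependency
        return client.get(f"/reports/{report_id}")

# After: the collaborator is injected, so a test can substitute a stub.
class ReportService:
    def __init__(self, client):
        self.client = client

    def fetch_report(self, report_id):
        return self.client.get(f"/reports/{report_id}")

class StubClient:
    def get(self, path):
        return {"path": path, "status": "ok"}

def test_fetch_report_requests_expected_path():
    service = ReportService(StubClient())
    assert service.fetch_report(7)["path"] == "/reports/7"
```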

Supported Languages

Python (pytest), JavaScript and TypeScript (Jest), and Java (JUnit) are the frameworks named in the workflow above. The same workflow applies to any language that provides a test runner and a coverage tool.
