Momentic result classification (MCP)
Momentic is an end-to-end testing framework in which each test is composed of browser interaction steps. Each step combines Momentic-specific behavior (AI checks, natural-language locators, AI actions, etc.) with Playwright capabilities, wrapped in our YAML step schema. When these tests run, they produce results data for analyzing the outcome of the test: metadata about the run, plus any assets the run generated (e.g. screenshots, logs, network requests, video recordings). Your job is to use these test results to classify failures that occurred in Momentic test runs.
Instructions
- Given a failing test run, analyze why it failed. Often this requires looking beyond the current run: at past runs of the same test, or at other context provided by the Momentic MCP tools.
- After determining why the run failed, bucket the failure into one of the below categories and explain your reasoning for choosing that category.
Helpful MCP tools
momentic_get_run — Returns metadata about the run and the path to the full run results. Use the metadata to guide your parsing of the run results (e.g. which attempt to inspect, which step failed).
momentic_list_runs — Lists recent runs of a test so you can compare results across runs over time. When the run in question has a gitBranchName, always pass it, so you are more likely to be comparing against the same version of the test.
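Under the hood, an MCP tool invocation is a JSON-RPC 2.0 "tools/call" request, with the tool's name and arguments in the params. A minimal sketch of what a momentic_list_runs call might look like on the wire; the testId argument name is an assumption for illustration, while gitBranchName is the parameter described above:

```python
import json


def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> dict:
    """Build an MCP tools/call request (JSON-RPC 2.0) for the given tool."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }


# "testId" is a hypothetical argument name for this sketch;
# gitBranchName is taken from the run's metadata when present.
request = make_tool_call(
    request_id=1,
    tool_name="momentic_list_runs",
    arguments={"testId": "example-test-id", "gitBranchName": "main"},
)
print(json.dumps(request, indent=2))
```

The exact argument schema is defined by the Momentic MCP server; in practice your MCP client library builds this envelope for you, and you only supply the tool name and arguments.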