apastra-validate
Validate PromptOps files against the apastra JSON schemas. Catches formatting errors, missing required fields, and invalid values before you run evaluations.
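As a rough illustration of the kind of check schema validation implies, here is a minimal stdlib-only sketch that verifies required top-level fields in a loaded JSON document. The field names (`name`, `version`) are illustrative assumptions, not the actual apastra schemas:

```python
import json

# Hypothetical required fields for a prompt spec; the real apastra
# schemas define their own required fields and value constraints.
REQUIRED_FIELDS = {"name", "version"}

def check_required(doc: dict) -> list[str]:
    """Return one error message per missing required top-level field."""
    missing = sorted(REQUIRED_FIELDS - doc.keys())
    return [f"missing required field: {field}" for field in missing]

def validate_file(path: str) -> list[str]:
    """Load a JSON file and report missing required fields."""
    with open(path, encoding="utf-8") as fh:
        return check_required(json.load(fh))
```

In practice the skill validates against full JSON schemas, which also catch invalid values and malformed structure, not just missing fields.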
When to Use
Use this skill when you want to:
- Check that prompt specs, datasets, evaluators, suites, and quick eval files are correctly formatted
- Validate inline assertions on dataset cases
- Validate files after scaffolding or manual edits
- Debug why an evaluation run is failing
Validation Process
When asked to validate (e.g., "validate my promptops files"):
Step 1: Discover Files
Scan the promptops/ directory for files to validate (prompt specs, datasets, evaluators, suites, and quick eval files).
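The discovery step can be sketched as a recursive scan for candidate files. The extensions and layout below are assumptions for illustration, not documented apastra conventions:

```python
from pathlib import Path

def discover_files(root: str) -> list[Path]:
    """Collect candidate PromptOps files under the given directory.

    Assumes PromptOps files are YAML or JSON; adjust the suffix set
    to match your project's actual layout.
    """
    base = Path(root)
    return sorted(
        p for p in base.rglob("*")
        if p.is_file() and p.suffix in {".yaml", ".yml", ".json"}
    )
```

Each discovered file would then be routed to the matching schema (spec, dataset, evaluator, suite, or quick eval) for validation.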