# Apastra Getting Started
Set up prompt versioning and evaluation in any project. No CI, no cloud, no framework — just files and your IDE agent.
## What Is Apastra?
Apastra treats AI prompts as versioned software assets. Prompts, test cases, and scoring rules are files in your repo. Your IDE agent runs evaluations, compares results against baselines, and catches regressions — all locally.
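Concretely, the setup below creates a layout along these lines (the per-directory comments are a reasonable reading of each name, not official documentation):

```
promptops/
├── prompts/      # versioned prompt specs
├── datasets/     # test cases
├── evaluators/   # scoring rules
├── suites/       # evaluation suites tying prompts, datasets, and evaluators together
├── schemas/      # JSON schemas the promptops files are validated against
└── policies/
derived-index/
├── baselines/    # known-good scorecards for regression comparison
└── regressions/
```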
## Quick Setup
### 1. Create the promptops directory
```shell
mkdir -p promptops/prompts promptops/datasets promptops/evaluators \
         promptops/suites promptops/schemas promptops/policies \
         derived-index/baselines derived-index/regressions
```
### 2. Create your first prompt spec
Create `promptops/prompts/summarize.yaml`:
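The authoritative field names are defined by the JSON schemas in `promptops/schemas/`; as an illustrative sketch only (every key below is an assumption, not the canonical Apastra schema), a minimal spec might look like:

```yaml
# promptops/prompts/summarize.yaml
# Illustrative sketch — field names are assumptions; validate the real file
# against the schemas in promptops/schemas/ before running evaluations.
id: summarize
version: 1.0.0
description: Summarize an input document in three sentences or fewer.
template: |
  Summarize the following text in at most three sentences:

  {{input}}
```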