# onboard: Coval Onboarding
Guide the user through setting up a complete AI evaluation from scratch using the `coval` CLI. Follow the phases below in order, asking questions at each step.

If `$ARGUMENTS` contains a use case (e.g., `insurance_claims`, `customer_support`), skip the use case question in Phase 2.
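The argument check above can be sketched in shell. This is a minimal illustration only; the `ARGUMENTS` and `USE_CASE` variable names are assumptions for the sketch, and the example value stands in for whatever the user passed:

```shell
# Hypothetical sketch of the $ARGUMENTS check described above.
ARGUMENTS="insurance_claims"   # example value; normally supplied by the invocation

if [ -n "$ARGUMENTS" ]; then
  # A use case was passed in, so Phase 2 can skip its question.
  USE_CASE="$ARGUMENTS"
  echo "Using use case: $USE_CASE (skipping Phase 2 question)"
else
  echo "No use case given; will ask in Phase 2"
fi
```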
## Phase 0: Setup + Preflight
### Step 1: Check CLI installation

```shell
coval --version
```
If the command fails or is not found, guide the user to install it based on their OS:
**macOS (Homebrew, recommended):**

```shell
brew install coval-ai/tap/coval
```
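Steps 1 and the install guidance can be combined into a single preflight helper. A minimal sketch, assuming only POSIX shell; the function name `check_coval` is hypothetical, and the brew tap is the one given above:

```shell
#!/bin/sh
# Hypothetical preflight helper: verify the coval CLI is on PATH;
# if it is missing on macOS, suggest the Homebrew install from above.
check_coval() {
  if command -v coval >/dev/null 2>&1; then
    # CLI found: print its version, as in Step 1.
    coval --version
  else
    echo "coval not found"
    if [ "$(uname -s)" = "Darwin" ]; then
      echo "install with: brew install coval-ai/tap/coval"
    fi
    return 1
  fi
}

check_coval || true
```

`command -v` is used rather than `which` because it is specified by POSIX and works in any `/bin/sh`.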