Netra Evaluation Setup
Use this skill to build reliable evaluation pipelines in Netra that catch regressions and measure quality over time.
When To Use
- You need repeatable quality checks for prompts, models, or agent logic.
- You want both subjective and deterministic scoring.
- You need a baseline before deploying AI changes.
Evaluation Design Framework
- Define the quality dimensions you care about (e.g., correctness, relevance, tone).
- Build or import a dataset of representative inputs and expected outputs.
- Select evaluator types per dimension: deterministic checks for objective criteria, judge-based scoring for subjective ones.
- Map dataset fields to prompt and evaluator variables carefully.
- Run test suites and inspect failures.
- Iterate on prompt, policy, or tool logic, then re-run to confirm the fix.
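The steps above can be sketched as a minimal, framework-agnostic harness. This is not the Netra evaluation API: the `Example` dataset shape, the evaluator signature, and the `run_suite` helper are all illustrative assumptions, with a stub standing in for the model or agent under test.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Example:
    """One dataset row: model inputs plus a reference answer (illustrative shape)."""
    inputs: Dict[str, str]
    expected: str


def exact_match(output: str, expected: str) -> float:
    """Deterministic evaluator: 1.0 on a normalized exact match."""
    return 1.0 if output.strip().lower() == expected.strip().lower() else 0.0


def contains_expected(output: str, expected: str) -> float:
    """Looser deterministic evaluator: reference string appears in the output."""
    return 1.0 if expected.lower() in output.lower() else 0.0


def run_suite(
    dataset: List[Example],
    generate: Callable[[Dict[str, str]], str],
    evaluators: Dict[str, Callable[[str, str], float]],
) -> List[Dict[str, object]]:
    """Run every example through the model and score each quality dimension."""
    results: List[Dict[str, object]] = []
    for ex in dataset:
        output = generate(ex.inputs)
        row: Dict[str, object] = {"inputs": ex.inputs, "output": output}
        for name, evaluator in evaluators.items():
            row[name] = evaluator(output, ex.expected)
        results.append(row)
    return results


# Stub standing in for the prompt/agent under test.
def stub_model(inputs: Dict[str, str]) -> str:
    return "Paris" if "France" in inputs["question"] else "unknown"


dataset = [
    Example({"question": "What is the capital of France?"}, "Paris"),
    Example({"question": "What is the capital of Japan?"}, "Tokyo"),
]
results = run_suite(
    dataset,
    stub_model,
    {"exact_match": exact_match, "contains": contains_expected},
)
failures = [r for r in results if r["exact_match"] < 1.0]
print(f"{len(failures)} of {len(results)} examples failed")
```

Inspecting `failures` rather than only the aggregate score is what makes the iterate step actionable: each failing row keeps its inputs and raw output, so you can trace why a dimension regressed before changing the prompt or tool logic.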
More from keyvaluesoftwaresystems/netra-skills
- netra-best-practices: Code-first Netra best-practices playbook covering setup, instrumentation, context tracking, custom spans/metrics, integration patterns, evaluation, simulation, and troubleshooting.
- netra-mcp-usage: Netra MCP trace-debugging workflow focused on query_traces, get_trace_by_id, and get_session_details, including exact input parameters, filter schema, operators, sorting, and pagination patterns.
- netra-simulation-setup: Set up Netra multi-turn simulations with scenario definitions, personas, fact checkers, evaluator configuration, and test-run analysis. Use to validate agent behavior before production.
- netra-decorator-instrumentation: Create custom Netra tracing instrumentation using decorators. Use when choosing between auto-instrumentation, decorators, and manual tracing in Python or TypeScript, with clear semantic span design.
- netra-setup: Install and initialize the Netra SDK with environment-safe defaults, instrument selection, and shutdown handling.
- netra-context-tracking: Implement request/session/user/tenant context tracking and conversation logging with Netra.