netra-simulation-setup
Netra Simulation Setup
Use this skill to design realistic multi-turn simulation datasets and evaluate conversational agent behavior.
When To Use
- You need to test multi-turn behavior, not only single-turn outputs.
- You want to compare agent performance across user personas.
- You need pre-production validation for goal achievement and factual correctness.
Simulation Building Blocks
- Evaluators: session-level scoring of conversation quality and outcomes.
- Multi-turn datasets: scenario name, scenario, persona, max turns, user data, and facts.
- Test runs: conversation transcript, evaluator results, and trace links.
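As a rough sketch, the building blocks above can be modeled as plain data: a dataset record holding the scenario fields, and a session-level evaluator scoring the resulting transcript. The field names and the `goal_achieved` evaluator below are illustrative assumptions, not Netra's actual schema or API.

```python
from dataclasses import dataclass, field

@dataclass
class SimulationScenario:
    # Illustrative schema mirroring the building blocks above;
    # not Netra's actual dataset format.
    scenario_name: str
    scenario: str                  # what the simulated user is trying to do
    persona: str                   # who the simulated user is
    max_turns: int                 # hard cap on conversation length
    user_data: dict = field(default_factory=dict)  # data the user can reveal
    facts: list = field(default_factory=list)      # ground truth for factual checks

def goal_achieved(transcript: list[str], facts: list[str]) -> float:
    """Toy session-level evaluator: fraction of expected facts
    that appear anywhere in the transcript."""
    text = " ".join(transcript).lower()
    hits = sum(1 for f in facts if f.lower() in text)
    return hits / len(facts) if facts else 1.0

scenario = SimulationScenario(
    scenario_name="refund-request",
    scenario="User asks for a refund on a delayed order",
    persona="impatient first-time customer",
    max_turns=8,
    user_data={"order_id": "A-1001"},
    facts=["refund window is 30 days"],
)
score = goal_achieved(
    ["Agent: Our refund window is 30 days from delivery."],
    scenario.facts,
)
# score == 1.0
```

A real test run would pair each scenario with a transcript produced by the agent under test and attach the evaluator scores and trace links to it.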