llm-annotation-guide
Pass
Audited by Gen Agent Trust Hub on Apr 16, 2026
Risk Level: SAFE
Full Analysis
- [SAFE]: The skill operates entirely within its stated purpose of providing instructional guidance for LLM output annotation. It does not contain any suspicious code, unauthorized network requests, or attempts to access sensitive system files.
- [PROMPT_INJECTION]: The skill processes untrusted user data (LLM traces and annotation samples) which creates an inherent surface for indirect prompt injection. This is a functional requirement for the skill's data analysis tasks and does not involve elevated permissions or dangerous capabilities.
- Ingestion points: User-provided dataset files and existing annotation samples accessed during analysis (SKILL.md).
- Boundary markers: Absent; instructions do not specify the use of delimiters for the ingested data.
- Capability inventory: Reading file content and generating textual feedback or reports.
- Sanitization: None; the skill evaluates data content to provide quality assessments.
Audit Metadata