ai-hallucination-fact-check-protocol
Pass
Audited by Gen Agent Trust Hub on May 13, 2026
Risk Level: SAFE
Full Analysis
- [SAFE]: The skill consists entirely of instructional text and metadata intended to guide an AI agent in generating educational protocols. It contains no executable scripts (Python, Node.js, Shell).
- [SAFE]: No sensitive file access or network operations were identified. The skill does not attempt to read system configuration files, environment variables, or credentials.
- [SAFE]: The instructions do not contain any prompt injection patterns designed to bypass safety guidelines or extract internal system prompts. The 'CRITICAL PRINCIPLES' section is used for pedagogical alignment, not behavior overriding.
- [SAFE]: No obfuscation techniques such as Base64, zero-width characters, or homoglyphs were detected.
- [SAFE]: Indirect prompt injection surface: The skill accepts user-provided content via the `ai_output_context` field for processing. However, because the skill has no dangerous capabilities (it cannot execute code, access the filesystem, or make network requests), this surface does not pose a security risk to the agent's environment.
  - Ingestion points: untrusted data enters via `ai_output_context` and `student_level` in SKILL.md.
  - Boundary markers: the content is delimited by Markdown bold headers (e.g., `**AI output context:**`).
  - Capability inventory: no subprocess calls, file writes, or network operations are present in the skill.
  - Sanitization: standard template interpolation is used without additional escaping.
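The obfuscation checks described above (Base64 payloads, zero-width characters, homoglyphs) can be sketched roughly as follows. This is a hypothetical illustration of what such a scan might look like, not the auditor's actual tooling; the function name and thresholds are assumptions.

```python
import re

# Common zero-width / invisible characters used to hide instructions.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}

# A long run of Base64-alphabet characters suggests an encoded payload.
BASE64_RUN = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")

# Cyrillic letters mixed into otherwise-Latin text are a crude
# homoglyph signal (e.g. Cyrillic 'а' vs. Latin 'a').
CYRILLIC = re.compile(r"[\u0400-\u04FF]")

def obfuscation_findings(text: str) -> list[str]:
    """Return a list of obfuscation indicators found in `text`."""
    findings = []
    if any(ch in ZERO_WIDTH for ch in text):
        findings.append("zero-width characters")
    if BASE64_RUN.search(text):
        findings.append("possible Base64 payload")
    if CYRILLIC.search(text):
        findings.append("Cyrillic homoglyph candidates")
    return findings
```

A clean instructional skill like this one would produce an empty findings list, which is consistent with the SAFE verdict above.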
Audit Metadata