llm-security
Pass
Audited by Gen Agent Trust Hub on Apr 18, 2026
Risk Level: SAFE
Full Analysis
- [SAFE]: The skill functions as a static documentation and reference guide for LLM security best practices.
- [SAFE]: While the skill contains common prompt injection strings (e.g., 'Ignore all previous instructions'), these are explicitly labeled as examples of attacks within a defensive context to teach the agent or user how to identify and prevent them.
- [SAFE]: The provided PHP code snippets for input and output validation are templates for developer use and are not executed by the agent environment.
- [SAFE]: The skill correctly identifies and warns against high-risk anti-patterns such as using
eval(), piping raw LLM output to shells, or granting excessive permissions to agents.
Audit Metadata