prompt-engeneering
Pass
Audited by Gen Agent Trust Hub on Apr 2, 2026
Risk Level: SAFE
Full Analysis
- [PROMPT_INJECTION]: The reference files (specifically references/failure-taxonomy.md and references/prompting-risks.md) contain explicit examples of adversarial prompts such as the DAN jailbreak and "ignore previous instructions" commands. These are clearly marked as "Minimal Reproducible Prompts" for educational taxonomy and auditing reference. \n- [PROMPT_INJECTION]: The skill processes untrusted user data (prompts provided for audit or improvement), creating an indirect prompt injection surface. This is mitigated by the skill's primary focus on teaching security best practices. \n
- Ingestion points: User-supplied prompts for improvement or audit (README.md, SKILL.md). \n
- Boundary markers: The skill recommends using XML tags and structural delimiters to separate instructions from data (SKILL.md). \n
- Capability inventory: The skill provides instructions on tool orchestration and agentic logic but does not contain tools that execute code or access the network itself. \n
- Sanitization: The reference library includes comprehensive guidance on input sanitization, output validation, and system prompt hardening (references/mistakes-security.md). \n- [COMMAND_EXECUTION]: The README.md file contains a standard installation command using the
npxpackage manager to fetch and install the skill from a public registry.
Audit Metadata