prompt-injection-guard
Installation
SKILL.md
Prompt Injection Guard
Protect AI applications from prompt injection and adversarial inputs.
When to Use
- Building user-facing AI applications
- Processing untrusted input with LLMs
- Implementing AI security controls
- Preventing prompt manipulation attacks
- Meeting security compliance requirements
Attack Types
1. Direct Injection
User directly attempts to override system instructions.
Related skills