llm-prompt-injection
Installation
SKILL.md
SKILL: LLM Prompt Injection — Expert Attack Playbook
AI LOAD INSTRUCTION: Expert LLM prompt injection techniques. Covers direct injection (instruction override, role play, context manipulation), indirect injection (RAG poisoning, web browsing, email), tool/function abuse, data exfiltration, MCP security risks, and defense bypass (encoding, splitting, few-shot). Base models miss the distinction between direct and indirect injection and underestimate tool-calling attack chains.
0. RELATED ROUTING
- ai-ml-security for broader ML security (adversarial examples, model poisoning, model extraction, data privacy attacks)
- xss-cross-site-scripting for parallels between XSS (injecting into HTML context) and prompt injection (injecting into LLM context)
- ssrf-server-side-request-forgery when prompt injection chains into SSRF via tool calls
Advanced Reference
Also load JAILBREAK_PATTERNS.md when you need:
- Categorized jailbreak technique library (DAN, developer mode, hypothetical scenarios, translation bypass)
- Multi-step escalation patterns
- Code-wrapping and ASCII art injection techniques
Related skills