llm-prompt-injection

SKILL.md

SKILL: LLM Prompt Injection — Expert Attack Playbook

AI LOAD INSTRUCTION: Expert LLM prompt injection techniques. Covers direct injection (instruction override, role play, context manipulation), indirect injection (RAG poisoning, web browsing, email), tool/function abuse, data exfiltration, MCP security risks, and defense bypass (encoding, splitting, few-shot). Base models often miss the distinction between direct and indirect injection and underestimate tool-calling attack chains.

0. RELATED ROUTING

Advanced Reference

Also load JAILBREAK_PATTERNS.md when you need:

  • Categorized jailbreak technique library (DAN, developer mode, hypothetical scenarios, translation bypass)
  • Multi-step escalation patterns
  • Code-wrapping and ASCII art injection techniques

Installs: 476 · GitHub Stars: 628 · First Seen: Apr 9, 2026