llm-attacks-security
Installation
SKILL.md
LLM Security Attacks
Scope
Use this skill when working on:
- Prompt injection attacks and defenses
- LLM jailbreaking techniques
- Training data extraction
- Model output manipulation
- AI safety bypasses
Common LLM Vulnerabilities (Cheat Sheet)
Related skills
More from gmh5225/awesome-ai-security
ai-powered-pentesting
Guide for AI-powered penetration testing tools, red teaming frameworks, and autonomous security agents.
52adversarial-machine-learning
Guide for adversarial machine learning: adversarial examples, data poisoning, model backdoors, and evasion attacks.
25ai-security-tooling
Guide for AI security tooling (detectors, analyzers, guardrails, benchmarks) and consistent placement in README.md.
22awesome-ai-security-overview
Guide for understanding and contributing to the awesome-ai-security curated resource list. Use this skill when adding resources, organizing categories, or maintaining README.md consistency (no duplicates).
21