ai-security-tooling
Installation
SKILL.md
AI Security Tooling
Scope
Use this skill when adding or organizing:
- LLM security tools (guardrails, detectors)
- Adversarial ML libraries
- AI vulnerability scanners
- Model safety tools
- Security benchmarks and frameworks
Tool Categories
LLM Security Tools
- Guardrails: NeMo Guardrails, LLM Guard, Rebuff
- Detectors: Vigil-LLM, Nova Framework, Garak
- Scanners: ModelScan, AI Security Analyzer
Related skills
More from gmh5225/awesome-ai-security
ai-powered-pentesting
Guide for AI-powered penetration testing tools, red teaming frameworks, and autonomous security agents.
52llm-attacks-security
Guide for LLM security attacks: prompt injection, jailbreaking, data extraction, and where to place resources in README.md.
36adversarial-machine-learning
Guide for adversarial machine learning: adversarial examples, data poisoning, model backdoors, and evasion attacks.
25awesome-ai-security-overview
Guide for understanding and contributing to the awesome-ai-security curated resource list. Use this skill when adding resources, organizing categories, or maintaining README.md consistency (no duplicates).
21