llm-security

Installation
SKILL.md

LLM Security Testing

This skill enables comprehensive security testing of Large Language Model applications and AI systems, covering prompt injection, jailbreaking, data poisoning, model extraction, and AI-specific vulnerabilities based on the OWASP Top 10 for LLM Applications.

When to Use This Skill

This skill should be invoked when:

  • Testing LLM applications for prompt injection vulnerabilities
  • Attempting to bypass AI guardrails and safety measures
  • Assessing RAG (Retrieval Augmented Generation) pipeline security
  • Testing AI agent systems for control flow vulnerabilities
  • Evaluating AI model API security
  • Reviewing AI application architectures for security issues

Trigger Phrases

  • "test this LLM for prompt injection"
  • "jailbreak the AI system"
  • "test AI guardrails"
  • "assess RAG security"
Related skills

More from hardw00t/ai-security-arsenal

Installs
5
GitHub Stars
39
First Seen
Feb 2, 2026