prompt-jailbreak
Audited by Socket on May 4, 2026
2 alerts found:
Security / Malware: Suspicious. The skill is internally consistent with its stated purpose, but that purpose is explicitly offensive: teaching an AI agent how to bypass LLM safeguards, override system instructions, and evade filters. There are no installs, credential grabs, or exfiltration flows, so this is not confirmed malware; however, it is a high-risk offensive-capability skill.
This artifact is explicitly designed to function as a jailbreak/prompt-injection template. It attempts to override safety controls, extract internal prompts/configurations/secrets, and produce actionable exploitation and tool-misuse guidance (SQL-injection automation, SSRF/RCE-style steps, and file, network, and code actions). While it is not a conventional supply-chain code module, it represents high-risk adversarial content if distributed inside any package, dataset, README, or agent prompt set that could be consumed by AI tooling.