prompt-jailbreak

Fail

Audited by Socket on May 4, 2026

2 alerts found:

Security · MEDIUM
SKILL.md

SUSPICIOUS. The skill is internally consistent with its stated purpose, but that purpose is explicitly offensive: teaching an AI agent how to bypass LLM safeguards, override system instructions, and evade filters. There are no installs, credential grabs, or exfiltration flows, so this is not confirmed malware; it is, however, a high-risk offensive-capability skill.

Confidence: 94% · Severity: 81%
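
For context on why a file like SKILL.md trips this alert class: the sketch below is a minimal, hypothetical heuristic scan, not Socket's actual detection logic. The pattern list and the file name are illustrative assumptions.

```python
import re
from pathlib import Path

# Illustrative jailbreak-style phrases; an assumption for this sketch,
# not Socket's rule set.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"bypass .{0,40}(safeguard|filter|guardrail)",
    r"override .{0,40}system (prompt|instruction)",
    r"pretend (you are|to be) .{0,40}without restrictions",
]

def scan_skill_file(path: str) -> list[str]:
    """Return the patterns that match the file's text (case-insensitive)."""
    text = Path(path).read_text(encoding="utf-8", errors="replace").lower()
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, text)]

if __name__ == "__main__":
    for pattern in scan_skill_file("SKILL.md"):
        print(f"suspicious phrasing matched: {pattern}")
```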
Malware · HIGH
references/bypass-templates.md

This artifact is explicitly designed to function as a jailbreak/prompt-injection template. It attempts to override safety controls, extract internal prompts/configurations/secrets, and produce actionable exploitation and tool-misuse guidance (SQL injection automation, SSRF/RCE-style steps, and file/network/code actions). While it is not a conventional supply-chain code module, it represents high-risk adversarial content if distributed inside any package, dataset, README, or agent prompt set that could be consumed by AI tooling.

Confidence: 78% · Severity: 88%
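
A common defensive counterpart to prompt-extraction templates like this one is a canary token: a unique marker embedded in the system prompt whose appearance in model output signals a successful extraction. The sketch below is a minimal illustration of that idea; `call_model` is a hypothetical stand-in for whatever LLM client is in use, not a real API.

```python
import secrets

def make_canary() -> str:
    """Generate a unique marker to embed in the system prompt."""
    return f"CANARY-{secrets.token_hex(8)}"

def guard_response(canary: str, response: str) -> str:
    """Suppress output that echoes the canary, which would indicate
    the system prompt was leaked by an injection attempt."""
    if canary in response:
        return "[blocked: possible system-prompt leak]"
    return response

canary = make_canary()
system_prompt = f"You are a helpful assistant. Internal marker: {canary}"
# Hypothetical usage; call_model is assumed, not a real client:
# raw = call_model(system_prompt, untrusted_user_input)
# print(guard_response(canary, raw))
```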
Audit Metadata
Analyzed At: May 4, 2026, 08:17 AM
Package URL: pkg:socket/skills-sh/wgpsec%2FAboutSecurity%2Fprompt-jailbreak%2F@767394933bc68131d77a0352cfbdcf61229755f3
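
The slashes in the skill's namespace are percent-encoded (%2F) inside the package URL. A minimal standard-library sketch for recovering the readable path and the pinned commit hash:

```python
from urllib.parse import unquote

purl = ("pkg:socket/skills-sh/"
        "wgpsec%2FAboutSecurity%2Fprompt-jailbreak%2F"
        "@767394933bc68131d77a0352cfbdcf61229755f3")

# Split off the version (the commit hash after '@'), then decode %2F.
name, _, version = purl.rpartition("@")
print(unquote(name))   # pkg:socket/skills-sh/wgpsec/AboutSecurity/prompt-jailbreak/
print(version)         # 767394933bc68131d77a0352cfbdcf61229755f3
```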