ctf-ai-ml
Audited by Socket on Apr 22, 2026
2 alerts found:
Anomaly / Security: This code is clearly offensive CTF material: it constructs and sends many LLM-attack payloads (prompt injection, jailbreaks, token smuggling via zero-width characters, context exhaustion, and tool-use exploitation attempts) to a chat API endpoint and prints the responses while searching for flags. It does not directly execute system commands or read local secrets itself, but it strongly aims to coerce a remote LLM/agent into privileged actions. As a supply-chain dependency it would be highly suspicious outside a controlled CTF setting; within the fragment shown, however, it does not implement malware locally. Its threat lies in inducing harmful behavior on the target service.
SUSPICIOUS: The skill is internally consistent for an offensive-AI CTF purpose and shows no credential theft or third-party exfiltration, but it is a high-risk offensive security skill for AI agents. The main risks are exploit enablement against AI systems, unsafe loading of untrusted model files, and ordinary supply-chain exposure from unpinned package installs.