skill-validator
Fail
Audited by Gen Agent Trust Hub on Mar 15, 2026
Risk Level: HIGHCREDENTIALS_UNSAFEPROMPT_INJECTIONCOMMAND_EXECUTIONDATA_EXFILTRATION
Full Analysis
- [CREDENTIALS_UNSAFE]: The documentation file
scripts/SECURITY_AUDIT_GUIDE.mdcontains hardcoded strings matching sensitive credential patterns, including an OpenAI API key (sk-1234567890) and a GitHub token (ghp_1234567890abcdef). While these are intended as examples for the security scanner, they trigger high-priority alerts for exposed secrets. - [PROMPT_INJECTION]: Both
scripts/security_audit.pyandscripts/SECURITY_AUDIT_GUIDE.mdutilize numerous instruction-override and bypass keywords, such as[SYSTEM:,BYPASS,IGNORE,OVERRIDE, andSKIP VALIDATION. These strings are part of the security scanning logic but pose a risk of confusing the agent's behavioral constraints. - [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection due to its processing of untrusted external skill files.
- Ingestion points: External skill files are read and processed by
scripts/security_audit.pyas defined in Step 11 ofSKILL.md. - Boundary markers: The validation process lacks explicit delimiters or instructions to ignore embedded commands within the files being audited.
- Capability inventory: The skill executes a local Python script (
scripts/security_audit.py) and performs file system read operations. - Sanitization: There is no evidence of content sanitization or validation for the audited files before they are read into the agent context.
- [COMMAND_EXECUTION]: The skill instructions (Step 11 and 13 in
SKILL.md) direct the agent to execute a local Python script,scripts/security_audit.py, which is used to perform file system operations for skill validation. - [DATA_EXFILTRATION]: The
scripts/security_audit.pyscript performs read operations on local file paths. If the script is directed to read sensitive system files, it could lead to unauthorized data exposure.
Recommendations
- AI detected serious security threats
Audit Metadata