security-audit

Fail

Audited by Gen Agent Trust Hub on May 15, 2026

Risk Level: CRITICAL
Findings: COMMAND_EXECUTION, PROMPT_INJECTION, DATA_EXFILTRATION
Full Analysis
  • [COMMAND_EXECUTION]: The skill includes a set of shell scripts (scripts/scanners/*.sh) that automate security audits across multiple ecosystems. These scripts use standard system utilities such as grep, awk, jq, and the GitHub gh CLI to identify vulnerable patterns and misconfigurations in local source code and repository settings.
  • [PROMPT_INJECTION]: The file references/llm-security.md documents various prompt injection techniques and provides detailed instructions for AI agents on how to defend against them. The instructions explicitly direct the agent to treat external content as untrusted data and ignore any directives embedded within user-provided files, representing a significant security hardening feature for the agent itself.
  • [DATA_EXFILTRATION]: Multiple documentation files in the references/ directory contain example patterns for sensitive data, such as AWS access keys (AKIAIOSFODNN7EXAMPLE) and JWT headers. These are utilized as reference templates to help the model identify hardcoded credentials in audited codebases. The skill does not contain any logic for exfiltrating sensitive files or credentials to external domains.
  • [COMMAND_EXECUTION]: The scripts/github-security-audit.sh script uses the gh CLI to programmatically query the GitHub API. This is used to verify critical security settings such as branch protection rules, secret scanning status, and the presence of a security policy, ensuring the target repository adheres to organizational security standards.
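The credential-scanning behavior described above can be sketched as a small shell function. This is a minimal illustration, not code from the audited skill: the function name, sample file, and temporary directory are hypothetical, and only the AKIA access-key-ID pattern comes from the report.

```shell
#!/bin/sh
# Minimal sketch of a credential scanner in the style the analysis
# describes: grep a source tree for hardcoded AWS access key IDs.
# scan_for_aws_keys and the sample file below are illustrative only.
scan_for_aws_keys() {
  # -r: recurse into the directory, -n: print line numbers,
  # -E: extended regex. AKIA followed by 16 uppercase alphanumerics
  # is the documented shape of an AWS access key ID.
  grep -rnE 'AKIA[0-9A-Z]{16}' "$1" 2>/dev/null
}

# Usage sketch against a throwaway directory with a planted key:
dir=$(mktemp -d)
printf 'aws_key = "AKIAIOSFODNN7EXAMPLE"\n' > "$dir/config.py"
scan_for_aws_keys "$dir"
rm -rf "$dir"
```

A real scanner would layer more patterns (JWT headers, private-key markers) and exclusions on top of the same grep core.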
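The repository-settings check can likewise be sketched in a few lines. In the real script the JSON would come from a gh CLI call such as `gh api repos/<owner>/<repo>`; here a canned response is used so the sketch is self-contained, and the field names follow the GitHub REST API's security_and_analysis object.

```shell
#!/bin/sh
# Sketch of the kind of check github-security-audit.sh is described as
# performing: verify that secret scanning is enabled on a repository.
# The canned JSON below stands in for a live `gh api` response.
response='{"security_and_analysis":{"secret_scanning":{"status":"enabled"}}}'

# jq -r extracts the raw string value of the nested status field.
status=$(printf '%s' "$response" | jq -r '.security_and_analysis.secret_scanning.status')

if [ "$status" = "enabled" ]; then
  echo "PASS: secret scanning enabled"
else
  echo "FAIL: secret scanning is $status"
fi
```

Branch-protection and security-policy checks follow the same pattern: one API query per setting, then a jq extraction compared against the expected value.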
Recommendations
  • CRITICAL: 1 infected file detected - DO NOT USE
Audit Metadata
Risk Level
CRITICAL
Analyzed
May 15, 2026, 03:52 PM