security-threat-model

Warn

Audited by Gen Agent Trust Hub on Apr 1, 2026

Risk Level: MEDIUMCOMMAND_EXECUTIONDATA_EXFILTRATIONPROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The skill relies on several local Python scripts (e.g., scripts/scan-threat-surface.py, scripts/scan-supply-chain.py, scripts/sanitize-learning-db.py) executed via the Bash tool. The source code for these scripts is not part of the skill package, which prevents a full security verification of the operations they perform.
  • [DATA_EXFILTRATION]: The skill explicitly targets and reads highly sensitive system locations including ~/.claude/settings.json, ~/.ssh/, ~/.aws/, and various .env files. While intended for security auditing, this broad access to credentials and configurations poses a significant data exposure risk if the skill or its scripts were to be subverted.
  • [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection during Phase 5 (Threat Model Synthesis).
  • Ingestion points: Data from security/surface-report.json, security/supply-chain-findings.json, and security/learning-db-report.json is loaded directly into the LLM context in Phase 5.
  • Boundary markers: No delimiters or instructions are provided to the model to distinguish between legitimate audit findings and potentially malicious instructions embedded within the data being scanned.
  • Capability inventory: The skill utilizes Read, Write, Bash, Grep, Glob, and Edit tools as defined in the YAML frontmatter.
  • Sanitization: There is no evidence of sanitization, escaping, or filtering of findings before they are processed by the LLM for the final report.
Audit Metadata
Risk Level
MEDIUM
Analyzed
Apr 1, 2026, 05:55 AM