skills/notque/claude-code-toolkit/do/Gen Agent Trust Hub

do

Fail

Audited by Gen Agent Trust Hub on Apr 30, 2026

Risk Level: HIGHCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The skill's primary instruction file, SKILL.md, directs the agent to execute local Python scripts using unvalidated user input as arguments. Specifically, Phase 2, Step 1 uses python3 scripts/index-router.py --request "{user_request}" --json, and other steps use similar patterns for learning-db.py. This direct interpolation of untrusted strings into shell commands presents a significant risk of command injection.
  • [COMMAND_EXECUTION]: In references/parallel-analysis.md, the skill constructs file system paths for reading and writing using user-provided arguments without validation. Phase 1, Step 2 executes ls agents/{target_name}.md, and Phase 3, Step 5 writes to skills/do/artifacts/synthesis-{target}-{date}.md. These patterns are vulnerable to path traversal and argument injection attacks if an attacker provides a malicious target name.
  • [PROMPT_INJECTION]: The "Parallel Multi-Perspective Analysis" workflow in references/parallel-analysis.md involves reading external "source material" and passing it into the prompts of multiple sub-agents. The prompt templates in references/perspective-prompts.md lack robust boundary markers or instructions for the sub-agents to ignore instructions embedded within the source material. This creates a surface for indirect prompt injection where adversarial content in documents could hijack sub-agent execution.
  • [COMMAND_EXECUTION]: The skill relies on several local scripts (index-router.py, learning-db.py, adr-query.py, feature-state.py, classify-repo.py) being present and secure, but it executes them with parameters derived directly from user requests, which is a high-risk pattern for any agentic workflow with shell access.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Apr 30, 2026, 12:34 PM