code-review
Warn
Audited by Gen Agent Trust Hub on Apr 1, 2026
Risk Level: MEDIUMCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION]: The
run.pyscript is vulnerable to shell command injection. It constructs shell commands by interpolating variablesbaseandpathsdirectly into an f-string which is then executed viasubprocess.run(cmd, shell=True). These variables are obtained from the CLI arguments without any sanitization or validation, allowing an attacker to execute arbitrary commands by injecting shell metacharacters into the branch name or file paths. - [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection because it reads external, potentially untrusted code and git diffs and presents them to the LLM for review without proper isolation.
- Ingestion points:
run.pycollects data viagit diffand reads the full text of changed files (usingPath.read_text). - Boundary markers: The
review_promptinrun.pydoes not utilize specific delimiters (like XML tags or triple-backticks with ignore-instructions) to isolate the untrusted code content from the instructions. - Capability inventory: The skill utilizes the
bashtool and has access to the local file system for reading and executing commands. - Sanitization: No sanitization, escaping, or content filtering is performed on the ingested code or diff data before it is interpolated into the prompt provided to the AI agent.
Audit Metadata