skills/cklxx/elephant.ai/code-review/Gen Agent Trust Hub

code-review

Warn

Audited by Gen Agent Trust Hub on Apr 1, 2026

Risk Level: MEDIUMCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The run.py script is vulnerable to shell command injection. It constructs shell commands by interpolating variables base and paths directly into an f-string which is then executed via subprocess.run(cmd, shell=True). These variables are obtained from the CLI arguments without any sanitization or validation, allowing an attacker to execute arbitrary commands by injecting shell metacharacters into the branch name or file paths.
  • [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection because it reads external, potentially untrusted code and git diffs and presents them to the LLM for review without proper isolation.
  • Ingestion points: run.py collects data via git diff and reads the full text of changed files (using Path.read_text).
  • Boundary markers: The review_prompt in run.py does not utilize specific delimiters (like XML tags or triple-backticks with ignore-instructions) to isolate the untrusted code content from the instructions.
  • Capability inventory: The skill utilizes the bash tool and has access to the local file system for reading and executing commands.
  • Sanitization: No sanitization, escaping, or content filtering is performed on the ingested code or diff data before it is interpolated into the prompt provided to the AI agent.
Audit Metadata
Risk Level
MEDIUM
Analyzed
Apr 1, 2026, 02:52 PM