code-review-ai-ai-review
Pass
Audited by Gen Agent Trust Hub on Apr 14, 2026
Risk Level: SAFECOMMAND_EXECUTIONPROMPT_INJECTIONDATA_EXFILTRATION
Full Analysis
- [COMMAND_EXECUTION]: The skill contains Python scripts and shell snippets that execute local command-line tools including
sonar-scanner,semgrep, andtrufflehogviasubprocess.runand standard shell execution. These are used to perform static analysis and secret scanning. - [DATA_EXFILTRATION]: The provided orchestration code and GitHub Action configurations access sensitive environment variables, specifically
GITHUB_TOKEN,ANTHROPIC_API_KEY, andOPENAI_API_KEY. These credentials are used to communicate with external LLM services and the GitHub API to transmit code diffs and post review comments. - [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection as it processes untrusted external data.
- Ingestion points: Arbitrary code content and pull request descriptions are ingested into the agent's context through the
$ARGUMENTSvariable inSKILL.mdand through file reads in the Python orchestration script. - Boundary markers: Prompts for the AI models use basic markdown headers to separate code from instructions but do not include strong delimiters or explicit warnings to the model to ignore instructions contained within the reviewed code.
- Capability inventory: The associated scripts and workflows have the ability to execute shell commands, read local file systems, and perform network requests to external APIs.
- Sanitization: There is no evidence of sanitization, escaping, or validation of the code diffs or PR metadata before they are interpolated into the LLM prompts.
Audit Metadata