gemini
Fail
Audited by Gen Agent Trust Hub on May 1, 2026
Risk Level: HIGHCOMMAND_EXECUTIONPROMPT_INJECTIONEXTERNAL_DOWNLOADS
Full Analysis
- [COMMAND_EXECUTION]: The skill constructs shell commands by interpolating variables like
${CONTEXT}and${USER_QUESTION}directly into double-quoted strings within the execution block (e.g.,gemini -p "... ${CONTEXT} ..."). Because these variables contain content from the local environment—such asgit diffoutputs, change logs (.notfair/change-log.json), and user-provided queries—a malicious file or specifically crafted input could include double quotes and shell metacharacters (like$()or backticks) to escape the intended command and execute arbitrary code on the user's host machine. - [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection as it ingests untrusted data from the local environment and conversation history into the model's prompt without adequate protection.
- Ingestion points: Data enters the context via
git diffresults, local Google Ads change logs (.notfair/change-log.json), and previous MCP tool call outputs (mcp__notfair__*). - Boundary markers: The prompt relies on simple text headers (e.g.,
CONTEXT:,BEFORE:,AFTER:) rather than robust delimiters or explicit instructions for the model to ignore potential commands within the data. - Capability inventory: The skill executes shell commands (
gemini,git) and reads local files. - Sanitization: No sanitization or escaping is performed on the data before it is interpolated into the shell command or the LLM prompt.
- [EXTERNAL_DOWNLOADS]: The skill instructs the user to install an external global package,
@google/gemini-cli, via NPM. While this originates from a well-known service provider, the skill creates a dependency on an external binary and user-managed authentication, which increases the local attack surface if the binary is compromised or hijacked via path manipulation.
Recommendations
- AI detected serious security threats
Audit Metadata