llm-council

Pass

Audited by Gen Agent Trust Hub on May 4, 2026

Risk Level: SAFECOMMAND_EXECUTIONDATA_EXFILTRATIONPROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The skill instructs the agent to construct shell commands by interpolating user-provided prompts or file content into CLI tools like codex exec and gemini. If the input content contains shell metacharacters (such as backticks, semicolons, or dollar signs) and is not properly escaped by the agent's tool execution layer, it could lead to arbitrary command execution on the host system.
  • [DATA_EXFILTRATION]: By design, the skill reads local files (e.g., CLAUDE.md, code files) and pastes their content into prompts sent to external LLM providers (OpenAI/Google) via CLI tools. While this is the intended purpose, it represents a data exposure risk if the agent inadvertently includes sensitive files or credentials in the context sent to these external services.
  • [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection as it processes untrusted data from local files and past conversation history.
  • Ingestion points: Reads local files (using Read, Glob, Grep) and conversation excerpts in SKILL.md under the 'Context passing' section.
  • Boundary markers: The instructions do not define clear delimiters or 'ignore embedded instructions' warnings for the data being passed into the CLI prompts.
  • Capability inventory: The skill utilizes the Bash tool for external API interaction and the Write tool to save transcripts in SKILL.md.
  • Sanitization: There are no explicit instructions for sanitizing or escaping the content before it is interpolated into the shell command strings.
Audit Metadata
Risk Level
SAFE
Analyzed
May 4, 2026, 05:20 AM