multi-llm-consult
Warn
Audited by Gen Agent Trust Hub on Mar 29, 2026
Risk Level: MEDIUMDATA_EXFILTRATIONCREDENTIALS_UNSAFEPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
- [DATA_EXFILTRATION]: The script
scripts/consult_llm.pyis used to send prompts and API credentials to external LLM providers. - The script allows overriding the destination API endpoint via the
--base-urlcommand-line argument. If an attacker can control the arguments passed to this script, they could redirect the prompt and sensitive API keys to a malicious server. - For the Gemini provider, the API key is transmitted as a query parameter within the URL string, which is a less secure method than header-based authentication as it can be exposed in server logs or system monitoring.
- [CREDENTIALS_UNSAFE]: The script reads sensitive API keys from a centralized configuration file located at
~/.claude/settings.jsonand from various environment variables (e.g.,OPENAI_API_KEY,GEMINI_API_KEY,DASHSCOPE_API_KEY). - [PROMPT_INJECTION]: The skill has a surface for indirect prompt injection due to how it handles external data.
- Ingestion points: The script reads untrusted content from the
--prompt,--prompt-file, and--context-filearguments, as well as from standard input. - Boundary markers: The script interpolates prompt and context data using simple newline characters without any delimiters or instructions to the external model to ignore embedded instructions in the context.
- Capability inventory: The script has the capability to perform network requests and read local files.
- Sanitization: No input sanitization or validation is performed on the content before it is forwarded to the external LLM providers.
- [COMMAND_EXECUTION]: The skill functions by executing a local Python script
scripts/consult_llm.pywhich takes multiple user-controlled arguments to perform network and file operations.
Audit Metadata