ask-many-models

Verdict: Fail

Audited by Gen Agent Trust Hub on Apr 7, 2026

Risk Level: HIGH
Findings: CREDENTIALS_UNSAFE, COMMAND_EXECUTION, DATA_EXFILTRATION
Full Analysis
  • [CREDENTIALS_UNSAFE]: The skill's setup wizard (bin/amm-setup) and core logic explicitly instruct users to store sensitive API keys (OpenAI, Anthropic, Google, xAI) in a plain-text .env file at /Users/ph/.claude/skills/ask-many-models/.env. Storing these keys unencrypted on the filesystem violates credential-management best practice; see the first sketch after this list for a keychain-based alternative.
  • [COMMAND_EXECUTION]: SKILL.md instructs the agent to execute shell commands that interpolate user-supplied slugs/filenames directly into command strings (e.g., echo "<prompt>" > /tmp/amm-prompt-draft-$(date +%s).md). If the platform does not sanitize these values, this is a command-injection vector; the second sketch after this list illustrates the risk and a quoted alternative.
  • [DATA_EXFILTRATION]: The skill's stated purpose is to send prompts to external AI APIs, but the bin/amm script can also attach arbitrary local files as context via a 'context picker' (e.g., gum file or find commands). Although intended for research, this capability can read sensitive local files and send their contents to multiple external providers simultaneously; the third sketch after this list shows one possible guard.
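For the credentials finding, a minimal mitigation sketch, assuming macOS (where the skill's .env path lives) and its built-in security CLI; the amm-openai/amm-anthropic service names are hypothetical, not part of the skill:

    #!/usr/bin/env bash
    # Hypothetical sketch: read API keys from the macOS Keychain at runtime
    # instead of a plain-text .env. Service names are illustrative only.
    set -euo pipefail

    # `security find-generic-password -w` prints just the stored secret.
    OPENAI_API_KEY="$(security find-generic-password -s amm-openai -w)"
    ANTHROPIC_API_KEY="$(security find-generic-password -s amm-anthropic -w)"
    export OPENAI_API_KEY ANTHROPIC_API_KEY

    # One-time setup, storing each key encrypted in the login keychain:
    #   security add-generic-password -s amm-openai -a "$USER" -w "sk-..."

This keeps keys encrypted at rest and out of any file a skill or agent can simply read.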
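For the command-execution finding, a sketch of why the pattern is injectable and of a safer variant; $user_prompt is an assumed variable holding the untrusted text:

    # Risky pattern (paraphrased from SKILL.md): the prompt is spliced into
    # the command string, so text like $(rm -rf ~) would be executed:
    #   echo "<prompt>" > /tmp/amm-prompt-draft-$(date +%s).md

    # Safer sketch: keep the untrusted text in a variable, write it with
    # printf (no re-parsing), and use mktemp for an unpredictable path.
    draft="$(mktemp /tmp/amm-prompt-draft-XXXXXX)"
    printf '%s\n' "$user_prompt" > "$draft"

Because the quoted variable is never evaluated as shell code, metacharacters in the prompt stay inert.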
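For the exfiltration finding, one possible guard, assuming realpath is available; AMM_CONTEXT_ROOT and allow_context_file are hypothetical names, not part of the skill's actual interface:

    # Hypothetical guard: only attach context files under an approved root.
    AMM_CONTEXT_ROOT="${AMM_CONTEXT_ROOT:-$HOME/research}"   # assumed knob

    allow_context_file() {
      local resolved
      resolved="$(realpath "$1")" || return 1
      case "$resolved" in
        "$AMM_CONTEXT_ROOT"/*) return 0 ;;                   # inside root: ok
        *) printf 'refused: %s\n' "$resolved" >&2; return 1 ;;
      esac
    }

    # Example: vet whatever the gum file picker returns before attaching it.
    allow_context_file "$(gum file)" || exit 1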
Recommendations
  • Automated analysis flagged serious security threats. Do not enable this skill without mitigating the findings above: move API keys out of plain-text .env storage, quote or sanitize user-supplied values before they reach the shell, and constrain which files the context picker may attach.
Audit Metadata
Risk Level: HIGH
Analyzed: Apr 7, 2026, 02:01 PM