ask-many-models
Audit Result: Fail
Audited by Gen Agent Trust Hub on Apr 7, 2026
Risk Level: HIGH (CREDENTIALS_UNSAFE, COMMAND_EXECUTION, DATA_EXFILTRATION)
Full Analysis
- [CREDENTIALS_UNSAFE]: The skill's setup wizard (`bin/amm-setup`) and core logic explicitly instruct users to store highly sensitive API keys (OpenAI, Anthropic, Google, xAI) in a plain-text `.env` file located at `/Users/ph/.claude/skills/ask-many-models/.env`. This violates security best practices for credential management, as the keys are stored unencrypted on the filesystem.
- [COMMAND_EXECUTION]: The `SKILL.md` file contains instructions for the agent to execute shell commands that incorporate user-supplied slugs/filenames directly into command strings (e.g., `echo "<prompt>" > /tmp/amm-prompt-draft-$(date +%s).md`). If not properly sanitized by the platform, this presents a command-injection risk.
- [DATA_EXFILTRATION]: While the skill's primary purpose is to send prompts to external AI APIs, the `bin/amm` script allows arbitrary local files to be included as context via a 'context picker' (e.g., `gum file` or `find` commands). While intended for research, this capability could be used to read sensitive local file content and send it to multiple external providers simultaneously.
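A minimal sketch of a mitigation for the plain-text key storage finding: refuse to load a `.env` file unless it is readable only by its owner. The file path, key name, and value below are illustrative placeholders, not taken from the skill itself.

```shell
#!/bin/sh
# Hypothetical demo: create a .env file, lock its permissions to 600,
# and only source it if the permission check passes.
env_file="./amm-demo.env"                       # placeholder path, not the skill's real .env
printf 'OPENAI_API_KEY=sk-demo\n' > "$env_file" # placeholder key value
chmod 600 "$env_file"

# stat flags differ between GNU (Linux) and BSD (macOS); try both.
perms=$(stat -c '%a' "$env_file" 2>/dev/null || stat -f '%Lp' "$env_file")

if [ "$perms" = "600" ]; then
  set -a                 # export every variable assigned while sourcing
  . "$env_file"
  set +a
else
  echo "refusing to load $env_file: permissions are $perms, expected 600" >&2
fi
```

This does not make plain-text storage safe (an OS keychain or secrets manager would be the stronger fix), but it at least blocks the common world-readable misconfiguration.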
Recommendations
- AI analysis detected serious security threats; review the credential storage, shell-command construction, and file-context handling above before installing this skill.
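One way to address the command-injection finding is to treat user-supplied slugs purely as data: strip everything outside an allow-list before the slug reaches a path or command string. This is a hypothetical sketch, not code from the skill; the slug value is a contrived attack payload.

```shell
#!/bin/sh
# Attacker-controlled slug (hypothetical). If this were interpolated into
# an eval'd command string, the ';' would terminate the command and run
# 'touch /tmp/amm-pwned' as a separate command.
slug='draft; touch /tmp/amm-pwned'

# Allow-list sanitization: keep only alphanumerics, underscore, and hyphen.
safe_slug=$(printf '%s' "$slug" | tr -cd 'A-Za-z0-9_-')

# Quoted variable expansion writes the file without any shell evaluation
# of the slug's contents.
out="/tmp/amm-prompt-${safe_slug}.md"
printf '%s\n' "prompt body" > "$out"
echo "wrote $out"
```

Quoting alone prevents word splitting and globbing, but the allow-list additionally keeps path separators and metacharacters out of the filename entirely.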
Audit Metadata