look-at
Pass
Audited by Gen Agent Trust Hub on Mar 17, 2026
Risk Level: SAFEPROMPT_INJECTION
Full Analysis
- [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection because it processes external, untrusted data (files) and interpolates a user-provided 'goal' into a system prompt without robust delimiters or sanitization.
- Ingestion points: The
--fileand--goalarguments inscripts/look_at.pyare the entry points for untrusted content. - Boundary markers: The prompt template in
scripts/look_at.pyuses simple labels like 'Goal:' but lacks cryptographically secure delimiters or explicit 'ignore instructions' warnings for the file content. - Capability inventory: The script can read any file accessible to the user and perform network requests to Google's API.
- Sanitization: No input validation, escaping, or filtering is performed on the file content or the goal string before being sent to the LLM.
- [SAFE]: The skill uses the official
google-genailibrary to interact with Google's Gemini API, which is a well-known and trusted service. This network activity is fundamental to the skill's primary purpose. - [SAFE]: The instructions in
SKILL.mdfor discovering and executing the script via the shell are standard for the intended environment and do not involve suspicious privilege escalation or hidden commands.
Audit Metadata