skills/edwinhu/workflows/look-at/Gen Agent Trust Hub

look-at

Pass

Audited by Gen Agent Trust Hub on Mar 17, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection because it processes external, untrusted data (files) and interpolates a user-provided 'goal' into a system prompt without robust delimiters or sanitization.
  • Ingestion points: The --file and --goal arguments in scripts/look_at.py are the entry points for untrusted content.
  • Boundary markers: The prompt template in scripts/look_at.py uses simple labels like 'Goal:' but lacks cryptographically secure delimiters or explicit 'ignore instructions' warnings for the file content.
  • Capability inventory: The script can read any file accessible to the user and perform network requests to Google's API.
  • Sanitization: No input validation, escaping, or filtering is performed on the file content or the goal string before being sent to the LLM.
  • [SAFE]: The skill uses the official google-genai library to interact with Google's Gemini API, which is a well-known and trusted service. This network activity is fundamental to the skill's primary purpose.
  • [SAFE]: The instructions in SKILL.md for discovering and executing the script via the shell are standard for the intended environment and do not involve suspicious privilege escalation or hidden commands.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 17, 2026, 02:35 AM