firebase-ai-logic

Pass

Audited by Gen Agent Trust Hub on Apr 22, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection because it ingests untrusted user data into the agent context and lacks explicit boundary markers or sanitization logic.
  • Ingestion points: The generateText, analyzeImage, and sendMessage functions in references/usage_patterns_web.md accept unvalidated user prompts and file data.
  • Boundary markers: None are present in the provided code snippets to delimit user content from system instructions.
  • Capability inventory: The skill facilitates network operations via the Firebase SDK to communicate with Gemini models.
  • Sanitization: There is no evidence of input validation, escaping, or filtering of external content before it is interpolated into model calls.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 22, 2026, 01:51 PM