HA Integration Dev

Pass

Audited by Gen Agent Trust Hub on Apr 17, 2026

Risk Level: SAFEPROMPT_INJECTIONEXTERNAL_DOWNLOADS
Full Analysis
  • [PROMPT_INJECTION]: The LLM conversation agent template in templates/conversation-agent/ contains a vulnerability surface for indirect prompt injection.
  • Ingestion points: Untrusted data enters the agent context via user_input.text in templates/conversation-agent/conversation_agent.py (lines 92-120).
  • Boundary markers: The system prompt implementation in templates/conversation-agent/conversation_agent.py (lines 142-171) does not use robust delimiters (like XML tags or random nonces) to separate system instructions from untrusted conversation history and user input.
  • Capability inventory: The agent utilizes self.hass.services.async_call in templates/conversation-agent/conversation_agent.py (line 330) to execute actions based on LLM output, granting it broad control over Home Assistant services.
  • Sanitization: There is no hard-coded whitelist to restrict which services the LLM can invoke; it relies on the model following the prompt's internal rules.
  • [EXTERNAL_DOWNLOADS]: The conversation agent template includes logic to communicate with external LLM providers including OpenAI, Anthropic, and local Ollama instances for processing user requests and managing conversation state.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 17, 2026, 08:27 AM