personize-responses

Pass

Audited by Gen Agent Trust Hub on Apr 1, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: The skill documentation follows security best practice by recommending environment variables (PERSONIZE_SECRET_KEY) for managing API keys rather than hardcoding credentials into scripts.
  • [SAFE]: The integration uses a vendor-provided library (@personize/sdk) and targets official API endpoints on the author's domain (personize.ai), representing legitimate vendor functionality.
  • [SAFE]: The tool-calling loop includes a security mechanism based on HMAC-SHA256 signatures (conversation_signature) that protects the integrity of the conversation state, preventing tampering, injection, or replay attacks across multi-step round trips.
  • [PROMPT_INJECTION]: The skill exposes an indirect prompt-injection surface through its data-processing capabilities.
  • Ingestion points: External data enters the context through 'inputs' template variables, 'messages' arrays, and the 'attachments' field as described in SKILL.md.
  • Boundary markers: The provided documentation does not explicitly specify delimiters (e.g., XML tags or triple quotes) to isolate untrusted input within the prompts.
  • Capability inventory: The skill allows the agent to trigger developer-defined 'execute' functions and perform memory operations (recall/memorize) via the Personize API.
  • Sanitization: While content filtering is not explicitly detailed, the platform protects conversation-history integrity with HMAC signatures, preventing the injection of unauthorized messages into the tool-loop history.
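The environment-variable recommendation in the first finding can be sketched as follows. This is a minimal Python sketch; the `load_api_key` helper is hypothetical and not part of the Personize SDK, which only documents the PERSONIZE_SECRET_KEY variable name:

```python
import os

def load_api_key() -> str:
    """Read the Personize API key from the environment.

    Fails loudly if the key is missing, rather than falling back to a
    hardcoded default, so a misconfiguration is caught at startup and
    no credential ever needs to live in source code.
    """
    key = os.environ.get("PERSONIZE_SECRET_KEY")
    if not key:
        raise RuntimeError(
            "PERSONIZE_SECRET_KEY is not set; refusing to start "
            "rather than embedding credentials in the script."
        )
    return key
```

The deliberate hard failure is the point of the pattern: a script that silently continues without a key tends to grow a hardcoded fallback later.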
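The boundary-marker gap noted above is typically closed by wrapping untrusted data in explicit delimiters before it reaches a prompt. A minimal sketch under stated assumptions: the `<untrusted_input>` tag name and the `wrap_untrusted` helper are hypothetical choices, not anything SKILL.md prescribes:

```python
def wrap_untrusted(text: str) -> str:
    """Isolate untrusted external data inside explicit XML-style
    boundary markers.

    Any embedded closing tag is escaped so the data cannot "break out"
    of its delimited region and masquerade as trusted prompt text.
    """
    safe = text.replace("</untrusted_input>", "&lt;/untrusted_input&gt;")
    return f"<untrusted_input>\n{safe}\n</untrusted_input>"
```

Delimiters do not neutralize injection on their own, but they give the model an unambiguous signal about where untrusted content begins and ends, which the audited documentation currently leaves unspecified.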
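The HMAC-SHA256 conversation-signature mechanism credited in the findings can be illustrated with a short sketch. It assumes the signature covers a canonical JSON serialization of the messages array; the actual scheme behind Personize's conversation_signature is not documented here, and the function names are illustrative:

```python
import hashlib
import hmac
import json

def sign_conversation(messages: list, secret: bytes) -> str:
    """Compute an HMAC-SHA256 signature over a canonical JSON
    serialization of the conversation history (sorted keys, no
    whitespace), so equal histories always produce equal payloads."""
    payload = json.dumps(messages, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_conversation(messages: list, secret: bytes, signature: str) -> bool:
    """Constant-time check that the history was not tampered with
    between tool-loop round trips."""
    expected = sign_conversation(messages, secret)
    return hmac.compare_digest(expected, signature)
```

Because the server holds the secret, a client (or an injected instruction) cannot insert, drop, or reorder messages without invalidating the signature on the next round trip, which is the integrity property the audit describes.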
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 1, 2026, 08:00 PM