personize-diagnostics

Pass

Audited by Gen Agent Trust Hub on Mar 20, 2026

Risk Level: SAFE
Full Analysis
  • [PROMPT_INJECTION]: The skill retrieves memorized data and guidelines in order to verify system state. Because this recalled content is intended for inclusion in LLM prompts, it creates a surface for indirect prompt injection; this exposure is inherent to memory-retrieval and diagnostic functionality.
  • Ingestion points: Data enters the context via client.memory.recall, client.memory.smartRecall, client.memory.smartDigest, and client.ai.smartGuidelines calls.
  • Boundary markers: The provided documentation and code templates do not use delimiters or other markers to isolate untrusted recalled content from trusted instructions in the system prompt.
  • Capability inventory: The skill uses client.ai.prompt for generation and client.memory.memorize for recording interaction results.
  • Sanitization: There is no evidence of content sanitization or instruction filtering in the provided diagnostic examples.
  • [EXTERNAL_DOWNLOADS]: References the official @personize/sdk and @trigger.dev/sdk libraries for integration and diagnostics.
  • [COMMAND_EXECUTION]: Includes instructions for running npx trigger.dev locally to debug and verify pipeline tasks.
  • [DATA_EXFILTRATION]: Conducts legitimate network operations to the personize.ai domain for troubleshooting and data retrieval. The skill correctly instructs developers to store API keys in environment variables (PERSONIZE_SECRET_KEY) rather than hardcoding them.
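The missing-boundary-marker and missing-sanitization findings above can be mitigated by wrapping recalled content in explicit delimiters before it reaches the prompt. The sketch below is a minimal illustration only; the delimiter names and the `wrapUntrusted` helper are assumptions, not part of the @personize/sdk API.

```typescript
// Hypothetical helper: wrap untrusted recalled content in explicit
// delimiters before interpolating it into an LLM prompt. The delimiter
// strings and function name are illustrative assumptions.
const DELIM_OPEN = "<untrusted_memory>";
const DELIM_CLOSE = "</untrusted_memory>";

function wrapUntrusted(content: string): string {
  // Strip any embedded copies of the delimiters so recalled content
  // cannot close the boundary early and smuggle in instructions.
  const sanitized = content
    .replaceAll(DELIM_OPEN, "")
    .replaceAll(DELIM_CLOSE, "");
  return `${DELIM_OPEN}\n${sanitized}\n${DELIM_CLOSE}`;
}

// Per the audit's key-handling note: read the secret from the
// environment rather than hardcoding it in source.
const apiKey = process.env.PERSONIZE_SECRET_KEY;
```

A system prompt built this way can instruct the model to treat anything between the delimiters as data, never as instructions.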
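The local-debugging and key-handling findings can be combined into one setup fragment. This is a sketch under stated assumptions: the env-var name comes from the report, but treat the exact CLI invocation as an assumption about the trigger.dev tooling.

```shell
# Keep the secret in the environment, never in source or logs.
export PERSONIZE_SECRET_KEY="sk_placeholder"  # placeholder value, not a real key

# Run the trigger.dev CLI locally to debug and verify pipeline tasks.
npx trigger.dev@latest dev
```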
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 20, 2026, 01:02 AM