openai-webhooks

Pass

Audited by Gen Agent Trust Hub on May 13, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: Implements Standard Webhooks signature verification using HMAC-SHA256 to ensure the authenticity and integrity of incoming webhook payloads.
  • [SAFE]: Includes replay attack prevention logic by validating the webhook-timestamp header and ensuring it falls within a 5-minute window of the current server time.
  • [SAFE]: Employs timing-safe comparison methods (such as Node.js's crypto.timingSafeEqual and Python's hmac.compare_digest) to protect against side-channel timing attacks when verifying signatures.
  • [SAFE]: Enforces the use of environment variables for sensitive credentials such as OPENAI_API_KEY and OPENAI_WEBHOOK_SECRET, preventing hardcoded secrets in the codebase.
  • [SAFE]: Correctly identifies and documents the requirement for raw request body access, which is a critical security step for valid signature verification and prevents common implementation vulnerabilities.
  • [SAFE]: Evaluated for indirect prompt injection via OpenAI webhook payloads. While the skill ingests untrusted external data, it lacks exploitable capabilities because the handling logic is limited to logging and status updates, with no subprocess execution or dynamic code evaluation.
  • Ingestion points: Webhook receiver endpoints in Express, FastAPI, and Next.js example implementations.
  • Boundary markers: Not explicitly present in console logging, but data is not passed to subsequent AI prompts.
  • Capability inventory: Logic is restricted to logging event IDs and types to the console across all provided scripts.
  • Sanitization: Input is parsed as JSON after verification, with no downstream execution or shell interpolation.
Audit Metadata
Risk Level
SAFE
Analyzed
May 13, 2026, 07:14 AM