openai-api

Pass

Audited by Gen Agent Trust Hub on Mar 15, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: The skill serves as an educational resource for integrating with the official OpenAI API. All code snippets follow industry-standard implementation patterns.
  • [CREDENTIALS]: No hardcoded secrets were detected. The skill explicitly advises users to 'Never expose API keys' and shows how to load keys from environment variables or use placeholders such as 'sk-...' for authentication.
  • [EXTERNAL_DOWNLOADS]: The skill references the official 'openai' package for both Python and Node.js, along with standard validation libraries like 'pydantic' and 'zod'. These are well-known, trusted dependencies for the stated purpose.
  • [PROMPT_INJECTION]: While the skill involves building LLM-based applications, which are inherently susceptible to indirect prompt injection, it uses structured message roles (system, user, assistant) and demonstrates 'Structured Outputs' with 'JSON Schema', standard mitigations for maintaining control over model behavior.
  • [COMMAND_EXECUTION]: No unauthorized or dangerous system commands were found. The 'execute_tool' example is a mock function demonstrating how to process tool calls safely within an application context.
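The environment-variable pattern the credentials finding refers to can be sketched as follows. This is a minimal illustration using only the standard library; the helper name `load_api_key` is our own, though `OPENAI_API_KEY` is the variable the official SDK reads by default:

```python
import os

def load_api_key() -> str:
    """Load the API key from the environment instead of hardcoding it.

    OPENAI_API_KEY is the variable name the official OpenAI SDK also
    reads by default, so code using the SDK can often omit the key
    argument entirely.
    """
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("Set OPENAI_API_KEY; never hardcode 'sk-...' keys.")
    return key
```

Keeping the key out of source files means it never lands in version control, which is the failure mode the audit's advice is guarding against.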
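The role separation and Structured Outputs mitigation noted in the prompt-injection finding look roughly like the request payload below. The field layout follows the Chat Completions API's `json_schema` response format; the model name, prompts, and schema contents are illustrative assumptions:

```python
# Illustrative Chat Completions request combining distinct message
# roles with a strict JSON Schema ("Structured Outputs") so the model
# cannot be steered into free-form output by injected instructions.
request = {
    "model": "gpt-4o-mini",  # assumed model name for illustration
    "messages": [
        {"role": "system", "content": "You are a strict JSON-only extractor."},
        {"role": "user", "content": "Extract the city from: 'Flights to Oslo'"},
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "city_extraction",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
                "additionalProperties": False,
            },
        },
    },
}
```

Because the schema sets `strict` and forbids additional properties, the model's reply is constrained to the declared shape even if untrusted user content tries to smuggle in new instructions.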
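A mock tool dispatcher of the kind the command-execution finding describes can be sketched like this. The tool name `get_weather` and its behavior are hypothetical; the key point is that only functions registered in an allowlist are callable, and arguments arrive as a JSON string, as in the OpenAI tool-calling API:

```python
import json

# Hypothetical local tool; the name and return value are illustrative.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

# Allowlist: the model can only request tools registered here.
TOOLS = {"get_weather": get_weather}

def execute_tool(name: str, arguments_json: str) -> str:
    """Safely dispatch a model-requested tool call to a local function.

    Unknown tool names and malformed JSON arguments produce error
    payloads instead of raising, so the model cannot trigger
    arbitrary code or crash the application loop.
    """
    func = TOOLS.get(name)
    if func is None:
        return json.dumps({"error": f"unknown tool: {name}"})
    try:
        args = json.loads(arguments_json)
    except json.JSONDecodeError:
        return json.dumps({"error": "malformed arguments"})
    return func(**args)
```

Dispatching through an explicit allowlist rather than, say, `eval` or `getattr` on arbitrary names is what keeps this pattern free of the dangerous command execution the audit checks for.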
Audit Metadata
Risk Level: SAFE
Analyzed: Mar 15, 2026, 11:08 AM