
ai-core

Pass

Audited by Gen Agent Trust Hub on May 14, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: No security issues detected. The analysis confirms that the files contain legitimate documentation, architectural patterns, and illustrative code snippets for the TanStack AI library.
  • [CREDENTIALS_SAFE]: The documentation correctly instructs developers to use environment variables for sensitive API keys (e.g., OPENAI_API_KEY, ANTHROPIC_API_KEY). All code examples use placeholders (e.g., 'sk-...') or environment variable references rather than hardcoded secrets.
  • [INDIRECT_PROMPT_INJECTION]: Because the framework necessarily processes untrusted user and LLM content, the documentation promotes strong boundary markers (systemPrompts) and mandatory runtime schema validation using libraries such as Zod, ArkType, or Valibot. These practices provide significant protection against typical injection and schema-confusion attacks in production environments.
  • [EXTERNAL_DOWNLOADS]: All external references and package imports target well-known, trusted registries (NPM) and official provider APIs (OpenAI, Anthropic, Google, etc.). No unauthorized or suspicious remote code execution patterns were identified.
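The credential-handling and schema-validation practices the findings above describe can be sketched in a few lines of TypeScript. This is an illustrative sketch, not TanStack AI's actual API: real code would use Zod's `z.object(...).safeParse` (or ArkType/Valibot equivalents) rather than the hand-rolled type guard shown here, and `requireApiKey` and `parseToolCall` are hypothetical helper names.

```typescript
// 1. Credentials come from environment variables, never hardcoded.
//    (requireApiKey is an illustrative helper, not a library function.)
function requireApiKey(name: string): string {
  const key = process.env[name];
  if (!key) {
    throw new Error(`${name} is not set`);
  }
  return key;
}

// 2. Treat LLM output as untrusted: validate its shape at runtime
//    before acting on it. Production code would express this schema
//    with Zod/ArkType/Valibot instead of a manual guard.
interface ToolCall {
  name: string;
  args: Record<string, unknown>;
}

function parseToolCall(raw: string): ToolCall {
  const value: unknown = JSON.parse(raw);
  const v = value as { name?: unknown; args?: unknown };
  if (
    typeof value !== "object" || value === null ||
    typeof v.name !== "string" ||
    typeof v.args !== "object" || v.args === null
  ) {
    throw new Error("LLM output failed schema validation");
  }
  return value as ToolCall;
}
```

A model response that parses as JSON but does not match the expected shape (for example, a numeric `name` injected by a prompt-injection attempt) is rejected before it reaches tool-execution code.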
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: May 14, 2026, 09:46 PM