postgresql-ai-platform
Pass
Audited by Gen Agent Trust Hub on Apr 18, 2026
Risk Level: SAFE
Full Analysis
- [SAFE]: The skill is purely instructional; it contains no executable scripts, binaries, or hidden commands that would run at load time or during execution.
- [SAFE]: Architectural patterns include robust security recommendations, such as using row-level security (RLS) as a safety layer and enforcing statement timeouts on AI-generated queries.
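The two controls named above can be sketched in PostgreSQL DDL. This is a minimal illustration, not taken from the skill itself: the `documents` table, its `tenant_id` column, the `app.current_tenant` setting, and the `ai_query_role` role are all hypothetical names.

```sql
-- RLS as a safety net: even if an AI-generated query omits the tenant
-- filter, the policy still restricts which rows are visible.
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON documents
    USING (tenant_id = current_setting('app.current_tenant')::uuid);

-- Bound the cost of any AI-generated query by capping runtime on the
-- role that executes it.
ALTER ROLE ai_query_role SET statement_timeout = '5s';
```

Setting the timeout at the role level (rather than per session) ensures the cap applies even if a generated query path forgets to set it.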
- [SAFE]: The guidance explicitly warns against common security anti-patterns, such as passing LLM output directly to SQL or querying embeddings without metadata filters in multi-tenant systems.
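Both anti-patterns have the same remedy: bind untrusted values as parameters and apply the tenant filter before similarity ranking. A minimal sketch, assuming a hypothetical `documents` table with an `embedding` column and the pgvector extension (whose `<=>` operator computes cosine distance):

```sql
-- Never interpolate LLM output into SQL text; bind it as a parameter.
-- The tenant filter runs before nearest-neighbor ranking, so results
-- can never leak across tenants in a multi-tenant system.
PREPARE search_docs(uuid, vector) AS
    SELECT id, title
    FROM documents
    WHERE tenant_id = $1          -- metadata filter first
    ORDER BY embedding <=> $2     -- then similarity ranking
    LIMIT 10;
```

A caller then supplies the tenant id and query embedding via `EXECUTE search_docs(...)` with bound values, never via string concatenation.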
- [SAFE]: Use of external LLM extensions is accompanied by instructions to classify data sensitivity and perform privacy reviews before sending data to third-party APIs.
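One common way to enforce such a classification at the database layer is to grant the extension's role access only to a view of pre-cleared data. A hedged sketch with hypothetical names (`documents_llm_safe`, `sensitivity`, `llm_extension_role`):

```sql
-- Expose only columns and rows cleared by the privacy review to the
-- role used by the external LLM extension.
CREATE VIEW documents_llm_safe AS
    SELECT id, title, public_summary
    FROM documents
    WHERE sensitivity = 'public';

REVOKE ALL ON documents FROM llm_extension_role;
GRANT SELECT ON documents_llm_safe TO llm_extension_role;
```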