neo4j-genai-plugin-skill
Pass
Audited by Gen Agent Trust Hub on May 12, 2026
Risk Level: SAFEPROMPT_INJECTIONDATA_EXFILTRATIONCREDENTIALS_UNSAFE
Full Analysis
- [PROMPT_INJECTION]: The skill documentation describes patterns for indirect prompt injection where untrusted data from the database is used to build LLM prompts.
- Ingestion points:
SKILL.md(e.g., using properties likec.text,p.descriptionor parameters like$textand$questionin prompts). - Boundary markers: Absent in most examples; instructions use simple string concatenation (e.g.,
'Summarize: ' + $text). - Capability inventory:
SKILL.md(includes the ability to perform graph write operations likeSETandMERGEbased on structured output from the LLM). - Sanitization: No explicit sanitization or escaping of database content is demonstrated before interpolation into prompts.
- [DATA_EXFILTRATION]: The skill facilitates the transmission of data from the Neo4j database to external third-party LLM providers (OpenAI, Azure OpenAI, Google VertexAI, and Amazon Bedrock). This is the intended functionality of the plugin, but it establishes a data flow where potentially sensitive graph content is sent to external APIs.
- [CREDENTIALS_UNSAFE]: The skill follows security best practices for secret management. It includes explicit warnings (e.g., 'Never hardcode API key literals') and consistently demonstrates the use of parameters (
$param) for providing API tokens and access keys to theai.text.*functions.
Audit Metadata