langchain-architecture
Pass
Audited by Gen Agent Trust Hub on Mar 24, 2026
Risk Level: SAFEPROMPT_INJECTION
Full Analysis
- [PROMPT_INJECTION]: The skill demonstrates application architectures that are susceptible to indirect prompt injection due to the way external data is interpolated into instructions.
- Ingestion points: The documentation describes patterns using
TextLoaderto ingest local documents andSequentialChainto process arbitrarytextinput variables. - Boundary markers: The provided
PromptTemplateexamples (e.g.,template="Extract key entities from: {text}\n\nEntities:") directly interpolate content into instructions without using delimiters such as XML tags, triple quotes, or explicit 'ignore embedded instructions' warnings. - Capability inventory: The skill documents the creation of tools with significant capabilities, including database searching (
search_database) and email dispatching (send_email), which could be abused if an injection occurs. - Sanitization: There is no mention or demonstration of input validation, escaping, or content filtering for the data being passed into the LLM components.
Audit Metadata