langchain-architecture

Pass

Audited by Gen Agent Trust Hub on Mar 24, 2026

Risk Level: SAFEPROMPT_INJECTION
Full Analysis
  • [PROMPT_INJECTION]: The skill demonstrates application architectures that are susceptible to indirect prompt injection due to the way external data is interpolated into instructions.
  • Ingestion points: The documentation describes patterns using TextLoader to ingest local documents and SequentialChain to process arbitrary text input variables.
  • Boundary markers: The provided PromptTemplate examples (e.g., template="Extract key entities from: {text}\n\nEntities:") directly interpolate content into instructions without using delimiters such as XML tags, triple quotes, or explicit 'ignore embedded instructions' warnings.
  • Capability inventory: The skill documents the creation of tools with significant capabilities, including database searching (search_database) and email dispatching (send_email), which could be abused if an injection occurs.
  • Sanitization: There is no mention or demonstration of input validation, escaping, or content filtering for the data being passed into the LLM components.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 24, 2026, 08:44 AM