langchain-js

Pass

Audited by Gen Agent Trust Hub on Mar 29, 2026

Risk Level: SAFEEXTERNAL_DOWNLOADSPROMPT_INJECTION
Full Analysis
  • [EXTERNAL_DOWNLOADS]: The skill documentation includes instructions to install various official LangChain ecosystem packages, such as 'langchain', '@langchain/openai', '@langchain/anthropic', and '@langchain/langgraph'.\n- [PROMPT_INJECTION]: The Retrieval-Augmented Generation (RAG) examples demonstrate a surface for indirect prompt injection.\n
  • Ingestion points: Data is loaded from external sources using 'WebLoader', 'TextLoader', and 'PDFLoader' within 'SKILL.md'.\n
  • Boundary markers: The provided prompt template uses a boundary instruction ('Answer the question based only on the following context') to delimit retrieved content.\n
  • Capability inventory: The skill demonstrates LLM invocation ('ChatOpenAI') and the use of tools ('TavilySearchResults', 'Calculator') that could be triggered by processed content.\n
  • Sanitization: No explicit input sanitization or validation is shown for the loaded content.
Audit Metadata
Risk Level
SAFE
Analyzed
Mar 29, 2026, 07:53 AM