ai-choosing-architecture

Pass

Audited by Gen Agent Trust Hub on May 8, 2026

Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTIONEXTERNAL_DOWNLOADS
Full Analysis
  • [PROMPT_INJECTION]: The skill describes architectures (such as RAG, agents, and multi-stage pipelines) that process untrusted external data. These architectures incorporate modules like ReAct, ProgramOfThought, and CodeAct, which possess significant capabilities including tool use and code execution. This creates a potential surface for indirect prompt injection attacks if the ingested data is not properly validated or sanitized.
  • Ingestion points: The skill suggests ingesting user-provided support tickets (examples.md), external document passages (examples.md, reference.md), and raw messages (reference.md).
  • Boundary markers: None are provided in the code templates to separate untrusted data from system instructions.
  • Capability inventory: The recommended modules include dspy.ReAct (runtime tool selection), dspy.ProgramOfThought (dynamic code generation and execution), and dspy.CodeAct (advanced code execution).
  • Sanitization: No sanitization or escaping mechanisms are shown in the provided implementation examples.
  • [COMMAND_EXECUTION]: A code skeleton in reference.md provides an example of a calculator tool using the Python eval() function. While the example includes a comment advising the use of a safe evaluator in production, providing templates with dynamic code execution on raw string input is a risky practice that could lead to command execution vulnerabilities if adopted without modifications.
  • [EXTERNAL_DOWNLOADS]: The instructions suggest that users install additional components from the author's own collection using the npx skills add command. This utilizes the standard npm registry to fetch resources from the vendor's repository.
Audit Metadata
Risk Level
SAFE
Analyzed
May 8, 2026, 10:53 AM