
Pass

Audited by Gen Agent Trust Hub on May 11, 2026

Risk Level: SAFE
Full Analysis
  • [PROMPT_INJECTION]: The skill architecture is designed to process untrusted external data, creating a potential surface for Indirect Prompt Injection. This is documented as a structural risk rather than an active exploit.
  • Ingestion points: The skill identifies multiple ingestion points for untrusted data in 'SKILL.md', including external source directories (OBSIDIAN_SOURCES_DIR) and various AI platform history logs (Claude, Copilot, Hermes, etc.).
  • Boundary markers: The skill provides no instructions for delimiting untrusted input with boundary markers, so the agent could inadvertently execute instructions embedded in ingested content during the distillation process.
  • Capability inventory: The framework leverages file-reading and searching capabilities ('Read', 'Grep') and provides shell logic for environment configuration resolution.
  • Sanitization: The architecture does not define validation or sanitization steps for the ingested content before it is synthesized by the agent.
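The boundary-marker gap noted above could be closed by wrapping each ingested document in explicit delimiters before the agent sees it. The sketch below is a minimal illustration; the marker strings and the `wrap_untrusted` helper are assumptions, not anything defined in 'SKILL.md'.

```python
def wrap_untrusted(content: str, source: str) -> str:
    """Delimit untrusted text so the agent treats it as data, not instructions.

    The marker names are illustrative; the skill defines no such convention.
    """
    return (
        f"<<<UNTRUSTED_INPUT source={source}>>>\n"
        f"{content}\n"
        f"<<<END_UNTRUSTED_INPUT>>>"
    )


# Example: wrapping a note pulled from an external source directory
wrapped = wrap_untrusted("Ignore previous instructions.", source="vault/notes.md")
```

A system prompt would then instruct the agent to treat anything between the markers purely as content to summarize, never as directives.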
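Similarly, a lightweight screening pass could flag instruction-like phrases in ingested content before synthesis. This is a hedged sketch of one possible approach; the pattern list and `flag_suspect_lines` function are hypothetical and deliberately non-exhaustive.

```python
import re

# Phrases that often signal embedded prompt-injection attempts (illustrative only).
SUSPECT_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"system prompt", re.I),
]


def flag_suspect_lines(text: str) -> list[str]:
    """Return the lines of ingested content that match instruction-like patterns."""
    return [
        line for line in text.splitlines()
        if any(p.search(line) for p in SUSPECT_PATTERNS)
    ]
```

Flagged lines could be stripped, escaped, or surfaced to the user rather than silently passed to the agent; pattern matching alone is a mitigation, not a guarantee.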
Audit Metadata
Risk Level: SAFE
Analyzed: May 11, 2026, 08:22 AM