llm-wiki

Pass

Audited by Gen Agent Trust Hub on Apr 25, 2026

Risk Level: SAFE · Findings: PROMPT_INJECTION, COMMAND_EXECUTION
Full Analysis
  • [PROMPT_INJECTION]: The skill is designed to ingest external data (articles, papers, transcripts) into a persistent wiki, creating an attack surface for indirect prompt injection where malicious content in processed sources could influence future AI behavior.
  • Ingestion points: The agent reads content from the raw/ directory during the ingest workflow, as documented in references/ingest-workflow.md.
  • Boundary markers: While the skill suggests the use of frontmatter and specific sections like 'Where this fits', it lacks strict structural delimiters to isolate untrusted content.
  • Capability inventory: The agent can write to the local file system (str_replace) and execute the provided Python scripts for wiki operations.
  • Sanitization: The skill relies on natural-language instructions for the AI to paraphrase content and hedge claims, rather than programmatic sanitization.
  • [COMMAND_EXECUTION]: The skill's instructions direct the AI agent to run several local Python scripts provided in the skill package for administrative tasks.
  • Evidence: SKILL.md and related reference documents include bash commands for python scripts/init_wiki.py, python scripts/wiki_search.py, python scripts/wiki_lint.py, and python scripts/wiki_stats.py.
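The missing structural isolation flagged in the boundary-markers finding could be closed programmatically rather than by natural-language instruction alone. A minimal sketch follows; the delimiter tag and helper function are hypothetical illustrations, not part of the audited skill:

```python
# Hypothetical sketch: fence untrusted ingested text in explicit boundary
# markers so downstream prompts can distinguish data from instructions.
# The <untrusted> tag and wrap_untrusted() helper are illustrative only.

def wrap_untrusted(source_path: str, text: str) -> str:
    """Return raw ingested text fenced by unambiguous structural delimiters."""
    # Neutralize delimiter look-alikes inside the untrusted payload first,
    # so injected markers cannot prematurely close the fence.
    cleaned = text.replace("<untrusted", "&lt;untrusted").replace(
        "</untrusted", "&lt;/untrusted"
    )
    return (
        f'<untrusted source="{source_path}">\n'
        f"{cleaned}\n"
        "</untrusted>"
    )

wrapped = wrap_untrusted("raw/article.txt", "Ignore prior instructions.")
```

In this scheme the ingest workflow would wrap everything read from raw/ before the model sees it, complementing (not replacing) the paraphrase-and-hedge instructions the skill already relies on.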
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Apr 25, 2026, 04:14 PM