skills/niracler/skill/note-to-blog/Gen Agent Trust Hub

note-to-blog

Pass

Audited by Gen Agent Trust Hub on May 14, 2026

Risk Level: SAFEPROMPT_INJECTIONDATA_EXFILTRATIONCOMMAND_EXECUTION
Full Analysis
  • [PROMPT_INJECTION]: The skill is vulnerable to indirect prompt injection. It ingests content from external sources (Obsidian vault and Claude history) and interpolates it into LLM evaluation prompts without sanitization or strict boundary markers.
  • Ingestion points: Markdown files in the note repository, ~/.claude/history.jsonl, and project-specific sessions-index.json files.
  • Boundary markers: The prompt template in references/scoring-criteria.md uses Markdown headers to separate user content, but lacks explicit instructions to ignore embedded commands or bypass instructions.
  • Capability inventory: The skill can execute local Python scripts, perform file system traversal, write new Markdown files to a blog repository, and spawn parallel sub-agents using the Task tool.
  • Sanitization: No escaping or validation is performed on the ingested note text or history logs before they are placed in the prompt.
  • [DATA_EXFILTRATION]: The skill accesses sensitive application data stored locally by the Claude Code CLI.
  • Evidence: scripts/note-to-blog.py reads interaction history from ~/.claude/history.jsonl and session data from ~/.claude/projects/.
  • Description: These files contain the full history of the user's interactions with the AI, which may include sensitive code, logic, or data discussed in previous sessions. While used for the intended purpose of 'Session activity signals,' this exposes private logs to the agent's context.
  • [COMMAND_EXECUTION]: The skill uses local script execution and agent task dispatching to manage the file pipeline.
  • Evidence: SKILL.md and references/agent-instructions.md detail the use of scripts/note-to-blog.py to collect data, convert formats, and manage pipeline state. It also uses the Task tool to create sub-agents that write to the blog repository.
Audit Metadata
Risk Level
SAFE
Analyzed
May 14, 2026, 02:37 AM