notebooklm

Pass

Audited by Gen Agent Trust Hub on May 4, 2026

Risk Level: SAFEEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [EXTERNAL_DOWNLOADS]: The skill instructions direct users to download and install code from external GitHub repositories: https://github.com/PleasePrompto/notebooklm-skill.git and https://github.com/akillness/oh-my-skills.
  • [COMMAND_EXECUTION]: The skill requires executing local Python scripts (scripts/run.py) to manage authentication, notebook libraries, and queries. It also uses the patchright CLI to install browser binaries.
  • [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection because it ingests and processes content from external notebooks that could contain malicious instructions.
  • Ingestion points: Data is retrieved from notebooklm.google.com via browser automation (SKILL.md).
  • Boundary markers: No specific boundary markers or instructions to ignore embedded commands are used when interpolating notebook content into the agent context.
  • Capability inventory: The skill uses the Bash, Write, Read, Glob, and Grep tools, allowing for file system modifications and shell command execution.
  • Sanitization: No sanitization or validation of the retrieved notebook content is performed before it is processed by the agent.
Audit Metadata
Risk Level
SAFE
Analyzed
May 4, 2026, 12:41 PM