notebooklm
Warn
Audited by Gen Agent Trust Hub on Mar 29, 2026
Risk Level: MEDIUM
Flags: CREDENTIALS_UNSAFE, COMMAND_EXECUTION, EXTERNAL_DOWNLOADS, PROMPT_INJECTION, REMOTE_CODE_EXECUTION
Full Analysis
- [PROMPT_INJECTION]: The SKILL.md file defines a 'Follow-Up Mechanism' using imperative language such as 'Required Claude Behavior' and 'STOP - Do not immediately respond to user', which attempts to override the agent's internal control flow and standard response logic.
- [CREDENTIALS_UNSAFE]: The scripts/auth_manager.py script captures Google authentication cookies and persists them in a local file at data/browser_state/state.json, which provides persistent session access to the user's Google account.
- [COMMAND_EXECUTION]: Multiple scripts, including scripts/run.py and scripts/setup_environment.py, use subprocess.run() to execute shell commands for environment management and script execution.
- [EXTERNAL_DOWNLOADS]: The scripts/setup_environment.py script downloads and installs the patchright automation library and a full Google Chrome browser binary from the internet during initialization. patchright is a third-party 'stealth' fork of Playwright.
- [REMOTE_CODE_EXECUTION]: The scripts/run.py script dynamically constructs and executes shell commands to run project scripts based on command-line arguments, creating a surface for unintended code execution.
- [PROMPT_INJECTION]: The skill ingests untrusted content from Google NotebookLM via scripts/ask_question.py, creating a surface for indirect prompt injection. Ingestion point: page queries in scripts/ask_question.py. Boundary markers: None present. Capability inventory: Shell command execution via scripts/run.py. Sanitization: No validation of external content is performed before processing.
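The CREDENTIALS_UNSAFE finding concerns the storage_state pattern that Playwright-family libraries (including patchright) use to persist sessions. The sketch below is illustrative, not the skill's actual code: `save_state` and the JSON shape are assumptions based on Playwright's documented storage_state format, and the `chmod` call is one minimal mitigation.

```python
import json
import os
import stat

def save_state(cookies: list, path: str) -> str:
    """Persist session cookies in Playwright's storage_state JSON shape.

    Any process that can read this file can replay the session, which is
    why the audit flags data/browser_state/state.json as sensitive.
    """
    state = {"cookies": cookies, "origins": []}
    with open(path, "w") as f:
        json.dump(state, f)
    # Minimal mitigation: make the file readable only by its owner.
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
    return path
```

Even with restrictive permissions, the file remains a long-lived credential; rotating or expiring it after use limits the exposure window.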
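The COMMAND_EXECUTION finding turns on how `subprocess.run()` is invoked. A minimal sketch, assuming nothing about the skill's actual call sites, contrasting the risky shell-string form with the safer argv form (function names are illustrative):

```python
import subprocess

def run_shell_string(user_input: str) -> str:
    # Pattern the audit warns about: interpolating input into a shell
    # string lets metacharacters like ';' or '&&' inject extra commands.
    result = subprocess.run(f"echo {user_input}", shell=True,
                            capture_output=True, text=True)
    return result.stdout

def run_argv(user_input: str) -> str:
    # Safer: pass an argument list so no shell ever parses the input.
    result = subprocess.run(["echo", user_input],
                            capture_output=True, text=True)
    return result.stdout
```

With the input `"hello; echo injected"`, the shell-string form executes two commands, while the argv form echoes the whole string as inert data.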
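For the EXTERNAL_DOWNLOADS finding, one standard mitigation is pinning a known-good digest before installing anything fetched at runtime. This is a sketch of that check, not something setup_environment.py is known to do:

```python
import hashlib

def verify_sha256(payload: bytes, expected_hex: str) -> bool:
    """Return True only if the downloaded bytes match a pinned digest.

    Verifying a pinned SHA-256 before installing a fetched binary
    (such as a browser download) turns a silent supply-chain swap
    into a hard failure instead of an unnoticed compromise.
    """
    return hashlib.sha256(payload).hexdigest() == expected_hex
```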
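The REMOTE_CODE_EXECUTION surface in a run.py-style dispatcher is commonly reduced with an allowlist: only known script names may be dispatched, and arguments are passed as a list so no shell parses them. The names below are hypothetical, based on the scripts mentioned in this report:

```python
# Hypothetical allowlist for a run.py-style dispatcher.
ALLOWED_SCRIPTS = {"ask_question", "auth_manager", "setup_environment"}

def build_command(script: str, args: list) -> list:
    """Construct an argv list for a permitted project script.

    Rejecting unknown names closes the path where a crafted
    command-line argument selects arbitrary code to execute.
    """
    if script not in ALLOWED_SCRIPTS:
        raise ValueError(f"script {script!r} is not on the allowlist")
    return ["python", f"scripts/{script}.py", *args]
```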
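The "Boundary markers: None present" observation refers to a common indirect-prompt-injection mitigation: fencing untrusted page content in delimiters so downstream prompts can instruct the model to treat everything inside as data. A sketch, with a hypothetical marker name; escaping embedded look-alikes prevents content from closing the fence early:

```python
MARKER = "UNTRUSTED_NOTEBOOKLM_CONTENT"

def wrap_untrusted(text: str) -> str:
    """Fence external content so it can be flagged as data, not instructions.

    Escaping any embedded marker stops the content from smuggling in a
    premature closing delimiter and escaping the fence.
    """
    safe = text.replace(MARKER, f"ESCAPED_{MARKER}")
    return f"<{MARKER}>\n{safe}\n</{MARKER}>"
```

Marker fencing is a mitigation, not a guarantee; it works only if the consuming prompt actually instructs the model to ignore instructions inside the fence.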
Audit Metadata