notebooklm

Pass

Audited by Gen Agent Trust Hub on Apr 7, 2026

Risk Level: SAFECOMMAND_EXECUTIONEXTERNAL_DOWNLOADSPROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The script scripts/import_sources.py executes the notebooklm command-line utility via subprocess.run. The use of a list of arguments instead of a single shell string prevents shell injection vulnerabilities.
  • [EXTERNAL_DOWNLOADS]: The skill requires the installation of the notebooklm-py package and Playwright browser binaries as part of its setup process.
  • [CREDENTIALS_UNSAFE]: The documentation mentions the path ~/.notebooklm/storage_state.json where authentication cookies are stored by the external CLI tool. The skill's own scripts do not directly access or exfiltrate this credential file.
  • [PROMPT_INJECTION]: The skill imports untrusted research data from external sources into the vault, creating a surface for indirect prompt injection. 1. Ingestion points: Data is imported via scripts/import_sources.py, scripts/extract_passages.py, and scripts/resolve_citations.py. 2. Boundary markers: The skill does not wrap imported content in delimiters or include instructions to ignore embedded commands. 3. Capability inventory: The skill can execute system commands via the notebooklm CLI. 4. Sanitization: No sanitization or escaping of the imported text is performed before it is written to the vault.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 7, 2026, 04:58 PM