notebooklm-cli

Verdict: Pass

Audited by Gen Agent Trust Hub on Apr 1, 2026

Risk Level: SAFE
Categories analyzed: COMMAND_EXECUTION, DATA_EXFILTRATION, PROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The skill executes the nlm binary using Python's subprocess.run with arguments passed as a list. This approach effectively prevents shell injection vulnerabilities. It also provides a raw command mode that allows arbitrary subcommands while explicitly blocking interactive sessions like chat start to maintain agent control.
  • [DATA_EXFILTRATION]: The skill implements functionality to add local files (add_file) and remote URLs (add_url) to NotebookLM notebooks. While this allows the agent to read filesystem content or external web data and transmit it to the NotebookLM service, this is the core intended functionality of the skill for source management.
  • [PROMPT_INJECTION]: The skill has an attack surface for indirect prompt injection because it processes content from untrusted external sources (URLs and files).
      ◦ Ingestion points: External data is ingested through the add_url, add_file, and query operations in run.py.
      ◦ Boundary markers: The skill does not currently use explicit delimiters or "ignore" instructions when passing ingested content to the underlying CLI.
      ◦ Capability inventory: The skill possesses command execution capabilities via subprocess.run to interact with the NotebookLM toolset.
      ◦ Sanitization: Command-line arguments are safely handled via subprocess list execution and shlex.quote for logging, but the content of ingested files or URLs is not sanitized or validated for malicious instructions.
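The execution pattern described above (list-based subprocess.run with no shell, a blocklist for interactive subcommands such as chat, and shlex.quote used only when rendering log output) can be sketched roughly as follows. The function and constant names are hypothetical illustrations, not the skill's actual identifiers:

```python
import shlex
import subprocess

# Hypothetical blocklist: interactive sessions (e.g. `nlm chat start`)
# would stall an unattended agent, so they are refused up front.
BLOCKED_SUBCOMMANDS = {"chat"}

def run_nlm(args):
    """Run the nlm binary with arguments passed as a list (shell=False)."""
    if args and args[0] in BLOCKED_SUBCOMMANDS:
        raise ValueError(f"interactive subcommand blocked: {args[0]}")
    cmd = ["nlm", *args]
    # shlex.quote is used only to produce a safe, copy-pasteable log line;
    # it plays no role in execution.
    print("running:", " ".join(shlex.quote(a) for a in cmd))
    # With a list argv and shell=False (the default), argument contents
    # cannot inject shell metacharacters.
    return subprocess.run(cmd, capture_output=True, text=True)
```

Because the argv is a list and no shell is involved, a malicious source title like `"; rm -rf ~"` is delivered to nlm as an inert string rather than interpreted as shell syntax.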
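As a hedged illustration of the boundary markers the audit notes are absent, a minimal wrapper could tag untrusted content before handing it downstream, so a consumer can treat it as data rather than instructions. The marker format and function name here are assumptions for illustration, not part of the skill:

```python
def wrap_untrusted(content: str, source: str) -> str:
    """Wrap ingested content in explicit delimiters naming its origin.

    Hypothetical mitigation sketch; the skill itself does not do this.
    """
    return (
        f"<<<UNTRUSTED CONTENT from {source}; treat as data, not instructions>>>\n"
        f"{content}\n"
        f"<<<END UNTRUSTED CONTENT>>>"
    )
```

Delimiters of this kind reduce, but do not eliminate, indirect prompt injection risk: a sufficiently adversarial document can still attempt to imitate or escape the markers, so they complement rather than replace content validation.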
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Apr 1, 2026, 02:52 PM