notebooklm

Warn

Audited by Gen Agent Trust Hub on Apr 19, 2026

Risk Level: MEDIUMEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [COMMAND_EXECUTION]: The shell script scripts/nlm-generate.sh uses the eval command to execute a string constructed from the NOTEBOOK_ID and REPORT_FORMAT variables. These variables are supplied via agent input and are not sanitized, allowing for arbitrary command execution if an attacker can influence those values (e.g., through a malicious notebook title or suggested format via indirect injection).
  • [EXTERNAL_DOWNLOADS]: The skill requires the notebooklm-py Python package, which is an unofficial third-party library. Installation instructions in notebooklm.md suggest installing directly from a GitHub repository (teng-lin/notebooklm-py) using pip install git+..., which bypasses standard registry protections and introduces supply chain risks.
  • [PROMPT_INJECTION]: The SKILL.md file contains instructions using authoritative and coercive language (e.g., 'YOU MUST invoke this skill', 'Failure to... violates your operational requirements') to force the agent to use the tool and potentially override its internal safety reasoning.
  • [PROMPT_INJECTION]: The skill is designed to ingest and process untrusted external data such as URLs and YouTube transcripts. It lacks explicit boundary markers or sanitization for this content, making it vulnerable to indirect prompt injection where malicious instructions in a processed source could trick the agent into misusing its filesystem access or command execution capabilities.
  • [EXTERNAL_DOWNLOADS]: The installation guide in notebooklm.md uses a pattern of fetching version tags via curl and piping to string processing tools to resolve installation targets, which is a risky practice when targeting unverified third-party repositories.
Audit Metadata
Risk Level
MEDIUM
Analyzed
Apr 19, 2026, 02:57 AM