notebooklm
Warn
Audited by Gen Agent Trust Hub on Apr 19, 2026
Risk Level: MEDIUM
Tags: EXTERNAL_DOWNLOADS, COMMAND_EXECUTION, PROMPT_INJECTION
Full Analysis
- [COMMAND_EXECUTION]: The shell script `scripts/nlm-generate.sh` uses the `eval` command to execute a string constructed from the `NOTEBOOK_ID` and `REPORT_FORMAT` variables. These variables are supplied via agent input and are not sanitized, allowing arbitrary command execution if an attacker can influence those values (e.g., through a malicious notebook title or a suggested format delivered via indirect injection).
- [EXTERNAL_DOWNLOADS]: The skill requires the `notebooklm-py` Python package, an unofficial third-party library. Installation instructions in `notebooklm.md` suggest installing directly from a GitHub repository (`teng-lin/notebooklm-py`) using `pip install git+...`, which bypasses standard registry protections and introduces supply-chain risk.
- [PROMPT_INJECTION]: The `SKILL.md` file contains instructions in authoritative, coercive language (e.g., "YOU MUST invoke this skill", "Failure to... violates your operational requirements") to force the agent to use the tool and potentially override its internal safety reasoning.
- [PROMPT_INJECTION]: The skill is designed to ingest and process untrusted external data such as URLs and YouTube transcripts. It lacks explicit boundary markers or sanitization for this content, making it vulnerable to indirect prompt injection, where malicious instructions in a processed source could trick the agent into misusing its filesystem access or command-execution capabilities.
- [EXTERNAL_DOWNLOADS]: The installation guide in `notebooklm.md` fetches version tags via `curl` and pipes the output through string-processing tools to resolve installation targets, a risky practice when targeting unverified third-party repositories.
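The `eval`-on-unsanitized-input pattern in the first finding can be illustrated with a minimal sketch. The variable name `NOTEBOOK_ID` comes from the finding; the surrounding script contents and the `validate_id` helper are assumptions for illustration, not the skill's actual code:

```shell
#!/bin/sh
# Attacker-influenced value (e.g., taken from a malicious notebook title).
NOTEBOOK_ID='abc123; touch /tmp/pwned'

# Unsafe pattern, as described in the finding: eval re-parses the string,
# so the `;` splits it into a second, attacker-chosen command:
#   eval "generate_report $NOTEBOOK_ID"   # also runs `touch /tmp/pwned`

# Safer: validate against an allow-list of characters, then pass the value
# as a single quoted argument so the shell never re-parses it.
validate_id() {
  case "$1" in
    ''|*[!A-Za-z0-9_-]*) return 1 ;;   # reject empty or any disallowed char
    *) return 0 ;;
  esac
}

if validate_id "$NOTEBOOK_ID"; then
  printf 'ok: %s\n' "$NOTEBOOK_ID"
else
  printf 'rejected: %s\n' "$NOTEBOOK_ID"
fi
```

Here the injected `; touch /tmp/pwned` is rejected by the allow-list before it can reach any command line, and the quoted `"$NOTEBOOK_ID"` expansion keeps the value a single argument even if validation were loosened.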
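For the two EXTERNAL_DOWNLOADS findings, a lower-risk install flow pins a registry release rather than resolving a moving git ref via `curl`. A hedged sketch, shown as commented commands since the exact version and hash values are placeholders (the package name comes from the finding):

```shell
# Instead of:  pip install git+https://github.com/teng-lin/notebooklm-py
# (a moving target resolved from an unverified repository), pin a
# registry release so the installed artifact is reproducible:
#   pip install 'notebooklm-py==<version>'
#
# Stronger still: record hashes in requirements.txt and require them,
# so a substituted artifact fails to install:
#   pip install --require-hashes -r requirements.txt
true  # no-op; the commands above are illustrative, not executed here
```

Neither step makes an unofficial package trustworthy, but both remove the "whatever the repository serves today" failure mode the audit flags.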
Audit Metadata