nlm
Fail
Audited by Gen Agent Trust Hub on Mar 17, 2026
Risk Level: HIGHCREDENTIALS_UNSAFEREMOTE_CODE_EXECUTIONCOMMAND_EXECUTIONEXTERNAL_DOWNLOADSPROMPT_INJECTIONDATA_EXFILTRATION
Full Analysis
- [CREDENTIALS_UNSAFE]: The skill requires the use of
nlm auth login, which extracts session cookies from an active Google Chrome session using the Chrome DevTools Protocol (CDP). These cookies are stored locally in~/.nlm/envto authenticate subsequent commands. This approach involves high-risk access to sensitive browser session data. - [DATA_EXFILTRATION]: The extraction of browser session cookies from the local environment into a tool-specific configuration file constitutes a significant data exposure and potential exfiltration risk, as these cookies provide full access to the user's Google NotebookLM account.
- [REMOTE_CODE_EXECUTION]: The skill's workflow documentation (references/workflows.md) includes a command that dynamically locates a Python script (
readwise_to_nlm.py) within a plugin cache directory using shell commands (ls,sort,tail) and then executes it. This dynamic path resolution and execution of external code creates a risk of running unauthorized or malicious scripts. - [COMMAND_EXECUTION]: Several examples in the documentation use shell command substitution and pipes (e.g.,
id=$(... | grep ... | cut ...)) to handle notebook identifiers. This pattern is vulnerable to command injection if the data being processed contains shell control characters. - [EXTERNAL_DOWNLOADS]: The skill includes functionality to download and process content from external URLs and PDF files through the
nlm addcommand, and it performs automated web searches usingnlm research. - [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection because it ingests and processes untrusted data from URLs, PDFs, and automated research results which are then used as context for AI-generated outputs (summaries, chat, and FAQs).
- Ingestion points: Untrusted content enters via
nlm add <url>,nlm add <pdf>, andnlm research <topic>. - Boundary markers: There are no explicit delimiters or instructions provided to the AI to ignore instructions embedded within the source materials.
- Capability inventory: The skill can execute the
nlmCLI, perform network requests viacurlandnlm, and write to the local filesystem. - Sanitization: No evidence of content sanitization or validation is present before data is passed to the generation commands.
Recommendations
- AI detected serious security threats
Audit Metadata