memory-search

Fail

Audited by Gen Agent Trust Hub on Apr 1, 2026

Risk Level: HIGHDATA_EXFILTRATIONCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [DATA_EXFILTRATION]: The get function in run.py is vulnerable to path traversal. It constructs the file path by joining a base directory with a user-supplied filename using filepath = _MEMORY_DIR / filename. Since pathlib.Path resolution allows absolute paths or directory traversal sequences (e.g., ../../) to override the base directory, an attacker can read sensitive system files outside of the intended memory directory.
  • [COMMAND_EXECUTION]: The search function in run.py passes the user-provided query argument directly to the grep command. While subprocess.run is called without shell=True, the lack of input sanitization allows for argument injection. A malicious query starting with a hyphen could inject unexpected flags into the grep execution.
  • [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection via the processing of untrusted conversation memories.
  • Ingestion points: Markdown files are read from the _MEMORY_DIR and returned to the agent in run.py via the get and search functions.
  • Boundary markers: No delimiters or safety instructions are used to separate memory content from agent instructions when the content is presented to the LLM.
  • Capability inventory: The skill has the capability to execute shell commands via subprocess.run in run.py and requires the bash tool as specified in SKILL.md.
  • Sanitization: No validation or sanitization is performed on the content of the memory files before they are provided to the agent context.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
Apr 1, 2026, 02:52 PM