session-anonymizer

Warn

Audited by Gen Agent Trust Hub on May 6, 2026

Risk Level: MEDIUMCREDENTIALS_UNSAFEEXTERNAL_DOWNLOADSCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [CREDENTIALS_UNSAFE]: The encryption feature in scripts/anonymize.py passes the user-provided password as a cleartext command-line argument (-pass pass:password) to the openssl utility. This practice exposes the password to any other user on a shared system who can view the process list (e.g., using ps or top).
  • [EXTERNAL_DOWNLOADS]: The skill requires the installation of the opf library directly from a remote GitHub repository (github.com/openai/privacy-filter.git) as specified in the prerequisites section of SKILL.md.
  • [COMMAND_EXECUTION]: The script dynamically executes external binary tools including the opf privacy filter and the openssl encryption utility via the subprocess.run function.
  • [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection. In scripts/anonymize.py, the run_ollama function constructs an LLM prompt by directly concatenating raw, untrusted transcript text to a hardcoded instruction string. This allows content within the transcript to potentially override the agent's instructions.
  • Ingestion points: scripts/anonymize.py reads data from input files or standard input via the main() function.
  • Boundary markers: None. The transcript text is appended directly to the end of the prompt string.
  • Capability inventory: The script can execute subprocesses (opf, openssl), write files to the local system, and perform local network requests (urllib.request).
  • Sanitization: No sanitization or escaping is performed on the input text before it is interpolated into the prompt.
Audit Metadata
Risk Level
MEDIUM
Analyzed
May 6, 2026, 09:27 PM