skills/juliusbrussee/caveman/compress/Gen Agent Trust Hub

compress

Pass

Audited by Gen Agent Trust Hub on May 1, 2026

Risk Level: SAFEDATA_EXFILTRATIONCOMMAND_EXECUTIONPROMPT_INJECTION
Full Analysis
  • [DATA_EXFILTRATION]: The skill reads local file contents and transmits them to the Anthropic API (via the anthropic Python library or the claude CLI) for processing. While this is the intended functionality, it involves crossing a data boundary. The implementation includes a robust heuristic filter in scripts/compress.py (is_sensitive_path) that blocks files with sensitive names (e.g., .env, credentials, id_rsa) or those located in sensitive directories (e.g., .ssh, .aws) to mitigate accidental credential exposure.
  • [COMMAND_EXECUTION]: The skill uses subprocess.run in scripts/compress.py to execute the claude CLI tool as a fallback mechanism. The command is executed with a static argument list (["claude", "--print"]) rather than a shell string, which prevents shell command injection.
  • [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection. It interpolates untrusted file content directly into LLM prompts in scripts/compress.py (build_compress_prompt and build_fix_prompt) without using boundary markers (like XML tags) or escaping. An attacker could place instructions inside a markdown file that might influence the LLM's compression behavior, although the impact is limited to the content of the file being overwritten.
  • Ingestion points: Reads text from arbitrary user-specified files in scripts/compress.py.
  • Boundary markers: None used in the compression prompts.
  • Capability inventory: Filesystem read/write (overwriting compressed files), network access via Anthropic API, and local command execution via subprocess.run.
  • Sanitization: Implements file extension checks and a sensitive path denylist, but does not sanitize the content itself before prompt interpolation.
Audit Metadata
Risk Level
SAFE
Analyzed
May 1, 2026, 12:59 PM