skill-comply
Pass
Audited by Gen Agent Trust Hub on May 12, 2026
Risk Level: SAFEPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
- [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection as it ingests untrusted content from user-provided skill or rule files to automatically generate behavioral specifications, test scenarios, and sandbox setup commands. An attacker could craft a malicious skill file to influence the LLM's output, potentially leading to the generation of unintended setup commands or agent prompts.
- Ingestion points: Skill/rule file path provided via the CLI in
scripts/run.py. - Boundary markers: Prompt templates in the
prompts/directory wrap external skill content with triple-dash (---) delimiters to help the LLM distinguish instructions from data. - Capability inventory: The skill can execute shell commands via
subprocess.run(inscripts/runner.pyfor sandbox setup) and invoke theclaudeagent withBash,Write, andEdittools enabled. - Sanitization:
scripts/runner.pyemploys a_safe_sandbox_dirfunction that usespath.resolve().relative_to()to verify that all sandbox operations are confined to/tmp/skill-comply-sandbox, effectively preventing path traversal attacks. - [COMMAND_EXECUTION]: The skill uses the
subprocessmodule to execute external binaries and shell commands. It invokes theclaudeCLI for several tasks, including tool call classification and scenario execution. It also runsgit initand LLM-generatedsetup_commandsduring the creation of test environments. The skill mitigates risks by usingshlex.splitfor argument parsing and avoidingshell=True, but the execution of commands derived from LLM interpretation of untrusted files remains a noteworthy security boundary.
Audit Metadata