oma-orchestrator

Fail

Audited by Gen Agent Trust Hub on May 11, 2026

Risk Level: HIGHCOMMAND_EXECUTION
Full Analysis
  • [COMMAND_EXECUTION]: The skill is configured to explicitly disable human-in-the-loop security checks for various AI CLI vendors.
  • In config/cli-config.yaml, the auto_approve_flag for the claude vendor is set to --dangerously-skip-permissions.
  • In config/cli-config.yaml, the auto_approve_flag for the gemini vendor is set to --approval-mode=yolo.
  • The codex and qwen vendors are configured with --full-auto and --yolo respectively.
  • These settings allow the orchestrator to execute sensitive operations via these tools without requiring explicit user consent or review.
  • [COMMAND_EXECUTION]: The orchestrator manages subagent lifecycles by dynamically constructing and executing shell commands based on internal templates and configuration.
  • SKILL.md defines native execution paths for multiple CLIs, including claude --agent <agent>, codex exec "@agent ...", and gemini -p "@agent ...".
  • Shell scripts in the scripts/ directory, such as spawn-agent.sh, parallel-run.sh, and verify.sh, use exec to run the oma CLI with arguments passed through from the orchestrator.
  • [INDIRECT_PROMPT_INJECTION]: The skill ingests untrusted data from task descriptions and acceptance criteria to generate instructions for subagents, creating a vulnerability surface.
  • Ingestion points: Task descriptions and acceptance criteria are read from task-board.md and configuration files as specified in SKILL.md.
  • Boundary markers: resources/subagent-prompt-template.md uses markdown headers and horizontal separators to delimit instructions.
  • Capability inventory: The skill can spawn subprocesses (scripts/spawn-agent.sh), write to the local filesystem (resources/memory-schema.md), and call external CLI tools.
  • Sanitization: There is no evidence of sanitization or escaping for the {TASK_DESCRIPTION} or {ACCEPTANCE_CRITERIA} placeholders before they are interpolated into the subagent prompt template, potentially allowing malicious task data to influence subagent behavior.
Recommendations
  • AI detected serious security threats
Audit Metadata
Risk Level
HIGH
Analyzed
May 11, 2026, 09:18 PM