research-lookup

Fail

Audited by Gen Agent Trust Hub on Apr 16, 2026

Risk Level: CRITICALREMOTE_CODE_EXECUTIONPROMPT_INJECTIONCOMMAND_EXECUTION
Full Analysis
  • [REMOTE_CODE_EXECUTION]: The skill's setup instructions and fallback logic command the agent to execute curl -fsSL https://parallel.ai/install.sh | bash. This pattern allows an unverified remote source to execute arbitrary code with the agent's privileges, bypassing security reviews of the local codebase.
  • File: SKILL.md, README.md
  • Evidence: curl -fsSL https://parallel.ai/install.sh | bash
  • [PROMPT_INJECTION]: The skill retrieves and processes untrusted data from web and academic searches (via Perplexity and Parallel APIs) without using delimiters or sanitization. Combined with the agent's 'Bash', 'Write', and 'Edit' capabilities, this creates a high risk of indirect prompt injection where malicious content in search results could hijack agent behavior.
  • Ingestion Points: Web search results via parallel-cli search, Perplexity Sonar API, and Parallel Chat API.
  • Capability Inventory: Access to Bash, Read, Write, and Edit tools as defined in SKILL.md.
  • Sanitization: Absent; no validation or escaping is performed on external data in research_lookup.py.
  • [COMMAND_EXECUTION]: The script scripts/generate_schematic.py invokes subprocesses to run secondary scripts with arguments derived from user prompts.
  • File: scripts/generate_schematic.py
  • Evidence: subprocess.run(cmd, check=False, env=env) where cmd includes the user-provided prompt argument.
  • While using a list-based invocation, the flow of unverified user input into system-level commands increases the risk of exploitation if combined with other vulnerabilities.
Recommendations
  • HIGH: Downloads and executes remote code from: https://parallel.ai/install.sh - DO NOT USE without thorough review
  • AI detected serious security threats
Audit Metadata
Risk Level
CRITICAL
Analyzed
Apr 16, 2026, 10:50 PM