
ljg-learn

Pass

Audited by Gen Agent Trust Hub on May 12, 2026

Risk Level: SAFE
Findings: PROMPT_INJECTION, COMMAND_EXECUTION
Full Analysis
  • [PROMPT_INJECTION]: The skill is susceptible to indirect prompt injection because it uses unvalidated user input (the concept name) to construct a file path for saving reports, creating a surface for path traversal attacks.
  • Ingestion points: User-supplied concept name used in the filename template.
  • Boundary markers: None present to distinguish the concept name from the rest of the file path.
  • Capability inventory: The agent is instructed to execute shell commands and write files.
  • Sanitization: No logic is provided to sanitize the concept name (e.g., preventing '../' or shell metacharacters) before it is interpolated into the shell command or file path.
  • [COMMAND_EXECUTION]: The skill instructs the agent to execute a shell command (date +%Y%m%dT%H%M%S) to generate a timestamp for the filename. While this specific command is benign, it establishes a mechanism for command execution that relies on interpolated data.
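A minimal sketch of the kind of mitigation the findings above call for, assuming the skill builds a report path from a user-supplied concept name plus a timestamp. The function name, the `reports` directory, and the `.md` extension are hypothetical; the point is that the concept name is reduced to a safe slug, the timestamp is generated in-process rather than by shelling out to `date`, and the resolved path is checked to stay inside the output directory:

```python
import re
from datetime import datetime
from pathlib import Path

REPORTS_DIR = Path("reports")  # hypothetical output directory

def safe_report_path(concept: str) -> Path:
    # Allow only alphanumerics, hyphens, and underscores in the concept name,
    # collapsing everything else (including '../' and shell metacharacters)
    # into single hyphens.
    slug = re.sub(r"[^A-Za-z0-9_-]+", "-", concept).strip("-") or "untitled"
    # Generate the timestamp in-process instead of executing `date +%Y%m%dT%H%M%S`.
    stamp = datetime.now().strftime("%Y%m%dT%H%M%S")
    path = (REPORTS_DIR / f"{slug}-{stamp}.md").resolve()
    # Defense in depth: reject any path that escapes the reports directory.
    if REPORTS_DIR.resolve() not in path.parents:
        raise ValueError(f"path escapes reports directory: {path}")
    return path
```

With this shape, a traversal attempt such as `../../etc/passwd` is flattened to the slug `etc-passwd` and the file still lands under `reports/`, addressing both the path-traversal surface and the interpolated command execution the audit flags.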
Audit Metadata
Risk Level
SAFE
Analyzed
May 12, 2026, 07:13 PM