
llm-security

Pass

Audited by Gen Agent Trust Hub on Apr 25, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: The skill is a set of security documentation and rules intended to guide agents and developers in securing LLM applications. It does not contain executable malicious code or dangerous instructions.
  • [CREDENTIALS_UNSAFE]: Multiple files (rules/system-prompt-leakage.md, rules/sensitive-disclosure.md) contain hardcoded database connection strings and dummy API keys. These are explicitly labeled as 'Vulnerable' examples for educational purposes and do not represent actual credentials or exfiltration attempts.
  • [COMMAND_EXECUTION]: Examples in rules/output-handling.md and rules/excessive-agency.md reference 'subprocess.run' and 'shlex'. They appear in secure-coding tutorials demonstrating how to parameterize and sandbox command execution to prevent injection; the skill itself does not use them for malicious purposes.
  • [REMOTE_CODE_EXECUTION]: In rules/supply-chain.md, the skill documents the dangers of loading models with 'pickle' and 'trust_remote_code=True', recommending 'safetensors' as a safer alternative. These are instructional patterns.
  • [PROMPT_INJECTION]: The skill body contains detection patterns for prompt injection (e.g., regex for 'ignore previous instructions'). These are used to teach the agent how to implement guardrails for the user's application.
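The parameterization pattern the [COMMAND_EXECUTION] finding refers to can be sketched as follows. The helper name and the grep invocation are illustrative assumptions, not code taken from the skill's rules:

```python
import shlex
import subprocess

def run_grep(pattern: str, path: str) -> str:
    """Run grep with user input passed as discrete argv entries.

    Because the command is a list and shell=False (the default),
    shell metacharacters in `pattern` are treated as literal data,
    so there is no shell for an injection to break out of.
    """
    result = subprocess.run(
        ["grep", "--", pattern, path],
        capture_output=True, text=True, check=False,
    )
    return result.stdout

# If a string ever must be embedded in a shell command line (e.g.
# for logging), shlex.quote() renders it inert:
unsafe = "foo; rm -rf /"
print(shlex.quote(unsafe))  # 'foo; rm -rf /' (single-quoted, harmless)
```

The key design choice is passing the command as a list rather than a `shell=True` string; quoting is a fallback, not the primary defense.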
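The supply-chain concern is concrete: `pickle.load()` can execute arbitrary callables named in the stream, which is why the skill points to safetensors (a format that stores only tensor data, no code). A minimal stdlib sketch of why raw pickle is dangerous, using the allowlisting-Unpickler pattern from the Python documentation; the allowlist contents here are assumptions:

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve any global not on an explicit allowlist.

    A hostile pickle stream names a callable (e.g. os.system) plus
    arguments for it; blocking find_class stops that gadget from
    ever being looked up, let alone called.
    """
    ALLOWED = {("builtins", "list"), ("builtins", "dict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"global {module}.{name} is forbidden")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain data round-trips fine:
print(restricted_loads(pickle.dumps([1, 2, 3])))  # [1, 2, 3]

# A stream that smuggles in a callable is rejected before it runs:
malicious = b"cos\nsystem\n(S'echo pwned'\ntR."
try:
    restricted_loads(malicious)
except pickle.UnpicklingError as e:
    print("blocked:", e)
```

Even with an allowlist, untrusted pickles remain risky; formats like safetensors avoid the problem entirely by never encoding executable objects.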
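The guardrail pattern the [PROMPT_INJECTION] finding describes can be illustrated with a small detector. The phrases and regexes below are illustrative assumptions, not the skill's actual rules:

```python
import re

# Hypothetical denylist of common injection phrasings. A real
# guardrail would pair this with semantic checks, since regexes
# alone are easy to evade (synonyms, encodings, other languages).
INJECTION_PATTERNS = [
    re.compile(r"ignore\s+(all\s+)?previous\s+instructions", re.IGNORECASE),
    re.compile(r"disregard\s+.*\b(system\s+prompt|guidelines)\b", re.IGNORECASE),
    re.compile(r"you\s+are\s+now\s+(in\s+)?developer\s+mode", re.IGNORECASE),
]

def flag_prompt_injection(text: str) -> bool:
    """Return True if the text matches any known injection pattern."""
    return any(p.search(text) for p in INJECTION_PATTERNS)

print(flag_prompt_injection("Ignore all previous instructions."))  # True
print(flag_prompt_injection("What is the capital of France?"))     # False
```

Such patterns are a first-pass filter: they flag obviously hostile input cheaply, while ambiguous cases are escalated to stricter handling.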
Audit Metadata
Risk Level: SAFE
Analyzed: Apr 25, 2026, 10:14 PM