prompt-engineering

Pass

Audited by Gen Agent Trust Hub on Apr 12, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: The skill consists entirely of instructional text and template examples for writing LLM prompts, including system prompts, few-shot examples, and chain-of-thought techniques.
  • [SAFE]: No external dependencies, network operations, or sensitive file accesses were detected.
  • [SAFE]: The skill includes best practices for security, specifically advising users on how to mitigate prompt injection risks by using clear delimiters for user input.
Audit Metadata
Risk Level
SAFE
Analyzed
Apr 12, 2026, 09:43 PM