guardrails
Pass
Audited by Gen Agent Trust Hub on May 1, 2026
Risk Level: SAFE
Full Analysis
- [PROMPT_INJECTION]: The skill mentions prompt-injection techniques (e.g., "Ignore previous instructions") only in a descriptive context: they are cited as examples of threats the guardrail pattern is designed to prevent, not used as active instructions to override agent behavior.
- [REMOTE_CODE_EXECUTION]: The skill contains a Python code snippet demonstrating a safety implementation pattern. The code is illustrative and contains no logic for downloading external scripts or executing arbitrary remote commands.
- [DATA_EXFILTRATION]: No network access or data exfiltration patterns were detected. The skill specifically discusses PII protection as a defensive measure.
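For context, the findings above refer to a guardrail pattern of the following general shape. This is a minimal hypothetical sketch, not the audited skill's actual code: the phrase list, regex, and function names are assumptions chosen for illustration.

```python
import re

# Hypothetical guardrail sketch: flag known prompt-injection phrases and
# redact simple PII before text reaches the agent. The phrase list and
# email regex below are illustrative assumptions, not the skill's code.

INJECTION_PHRASES = [
    "ignore previous instructions",
    "disregard all prior rules",
]

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")


def contains_injection(text: str) -> bool:
    """Return True if the text contains a known injection phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in INJECTION_PHRASES)


def redact_pii(text: str) -> str:
    """Replace email addresses with a placeholder (defensive PII redaction)."""
    return EMAIL_RE.sub("[REDACTED_EMAIL]", text)


def guard(text: str) -> str:
    """Reject injected input; otherwise return the PII-redacted text."""
    if contains_injection(text):
        raise ValueError("Potential prompt injection detected")
    return redact_pii(text)
```

Note that the checks run entirely locally with no network access, which is consistent with the DATA_EXFILTRATION finding above.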
Audit Metadata