ai-guardrails

AI Guardrails

Overview

Add safety layers to AI applications — input validation, prompt injection detection, output filtering, content moderation, and policy enforcement. Prevent misuse without breaking legitimate use cases.

Instructions

Defense layers

User Input → Input Guardrails → LLM → Output Guardrails → User Response
                 │                          │
                 ├─ Prompt injection check  ├─ Content policy check
                 ├─ PII detection           ├─ Hallucination detection
                 ├─ Topic restrictions      ├─ PII scrubbing
                 └─ Rate limiting           └─ Schema validation
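
The two layers above can be sketched as a small Python pipeline. This is a minimal illustration, not this skill's implementation: the function names (`check_input`, `scrub_output`, `guarded_call`) and the regex patterns are hypothetical, and real deployments typically use trained classifiers rather than keyword regexes for injection detection.

```python
import re

# Hypothetical pattern list -- production systems use classifiers, not regexes.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
]

# Simple email matcher, standing in for a full PII detector.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def check_input(text: str) -> list[str]:
    """Input guardrails: flag likely prompt injection and PII."""
    issues = []
    lowered = text.lower()
    if any(re.search(p, lowered) for p in INJECTION_PATTERNS):
        issues.append("possible prompt injection")
    if EMAIL_RE.search(text):
        issues.append("PII: email address")
    return issues

def scrub_output(text: str) -> str:
    """Output guardrails: redact PII before the response reaches the user."""
    return EMAIL_RE.sub("[REDACTED]", text)

def guarded_call(user_input: str, llm) -> str:
    """Run input checks, call the model, then scrub the output."""
    issues = check_input(user_input)
    if issues:
        return f"Request blocked: {', '.join(issues)}"
    return scrub_output(llm(user_input))
```

A blocked input never reaches the model, while a clean input passes through and its response is scrubbed on the way out; keeping the two layers separate lets each evolve independently (e.g. swapping the regex checks for a moderation model) without touching the call site.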
Installs: 2 · GitHub Stars: 47 · First Seen: Mar 16, 2026