agent-governance

Summary

Declarative policies, intent classification, and audit trails for controlling AI agent tool access and behavior.

  • Composable governance policies define allowed/blocked tools, content filters, rate limits, and approval requirements — stored as configuration, not code
  • Semantic intent classification detects dangerous prompts (data exfiltration, privilege escalation, prompt injection) before tool execution using pattern-based signals
  • Tool-level governance decorator enforces policies at function call time with rate limiting, content checking, and audit logging
  • Trust scoring with temporal decay tracks agent reliability in multi-agent systems, gating sensitive operations based on historical success rates
  • Append-only audit trails capture all governance events (allowed, denied, errors) for compliance and security review
  • Works with any agent framework: PydanticAI, CrewAI, OpenAI Agents, LangChain, AutoGen
SKILL.md

Agent Governance Patterns

Patterns for adding safety, trust, and policy enforcement to AI agent systems.

Overview

Governance patterns ensure AI agents operate within defined boundaries: they control which tools agents can call, what content they can process, and how much they can do, while maintaining accountability through audit trails.

User Request → Intent Classification → Policy Check → Tool Execution → Audit Log
                     ↓                      ↓               ↓
              Threat Detection         Allow/Deny      Trust Update
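The intent-classification stage in this pipeline runs before any tool executes. A sketch of the pattern-based approach described above, with hypothetical regex signals for the three threat categories (a production classifier would use far richer signals):

```python
import re

# Illustrative pattern-based threat signals, one per category from the summary.
THREAT_PATTERNS = {
    "data_exfiltration": re.compile(r"\b(exfiltrate|dump all)\b", re.I),
    "privilege_escalation": re.compile(r"\b(sudo|become admin|escalate privileges)\b", re.I),
    "prompt_injection": re.compile(r"ignore (all )?previous instructions", re.I),
}

def classify_intent(prompt: str) -> list[str]:
    """Return the threat categories whose patterns match the prompt."""
    return [name for name, pat in THREAT_PATTERNS.items() if pat.search(prompt)]

def handle_request(prompt: str) -> str:
    # Classification happens first; a hit denies the request before tool execution.
    threats = classify_intent(prompt)
    if threats:
        return "denied: " + ", ".join(threats)
    return "allowed"
```

Running classification ahead of the policy check means a dangerous prompt is rejected even if the tool it targets would otherwise be allowed.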

When to Use

  • Agents with tool access: Any agent that calls external tools (APIs, databases, shell commands)
  • Multi-agent systems: Agents delegating to other agents need trust boundaries
  • Production deployments: Compliance, audit, and safety requirements
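For the multi-agent case, the summary mentions trust scoring with temporal decay. One way to realize that (a hypothetical scheme using an exponential half-life, with explicit timestamps so behavior is deterministic):

```python
class TrustScore:
    """Exponentially decayed success rate: recent outcomes weigh more."""

    def __init__(self, half_life_s: float = 3600.0, now: float = 0.0):
        self.half_life_s = half_life_s
        self.weighted_success = 0.0
        self.weighted_total = 0.0
        self.last_update = now

    def _decay(self, now: float) -> None:
        # Halve accumulated weight once per half-life elapsed.
        factor = 0.5 ** ((now - self.last_update) / self.half_life_s)
        self.weighted_success *= factor
        self.weighted_total *= factor
        self.last_update = now

    def record(self, success: bool, now: float) -> None:
        self._decay(now)
        self.weighted_total += 1.0
        if success:
            self.weighted_success += 1.0

    def score(self, now: float) -> float:
        self._decay(now)
        if self.weighted_total == 0.0:
            return 0.5  # neutral prior for an unknown agent
        return self.weighted_success / self.weighted_total

def can_run_sensitive(trust: TrustScore, now: float, threshold: float = 0.8) -> bool:
    """Gate sensitive operations on historical reliability."""
    return trust.score(now) >= threshold
```

Because old observations decay, an agent's early successes cannot indefinitely mask a recent failure, which is the property that makes the score useful as a gate.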
Installs: 8.9K · GitHub Stars: 32.8K · First Seen: Feb 19, 2026