gcp-agent-safety-gatekeeper
This skill implements the Python integration layer for Model Armor. Grounded in security_blog.md, it provides the safety_util functions needed to intercept prompts, sanitize them against your security policy, and handle safety triggers in your FastAPI backend.
Usage
Ask Antigravity to:
- "Add a safety gatekeeper to my agent backend"
- "Implement Model Armor prompt sanitization in Python"
- "Create a safety utility to parse Model Armor findings"
- "Handle prompt injection errors in my FastAPI app"
Integration Pattern
- Client Initialization: Configures the ModelArmorClient with the correct regional endpoint.
- safety_util.py: A robust parser that converts SanitizeUserPromptResponse into a list of human-readable security triggers (e.g., "Prompt Injection", "PII: Person names").
- Application Interception: Logic to block or sanitize prompts before they reach the GenAI model or agent orchestrator.
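The parsing step described above might look like the following sketch. The real SanitizeUserPromptResponse is a protobuf returned by the Model Armor client library; here the per-filter match states are modeled as a plain dict so the mapping logic can be shown on its own, and the filter keys and label strings are illustrative assumptions, not taken from the skill itself.

```python
# Hypothetical mapping from Model Armor filter keys to readable labels.
# Keys and labels are assumptions for illustration.
FILTER_LABELS = {
    "pi_and_jailbreak": "Prompt Injection",
    "malicious_uris": "Malicious URI",
    "rai": "Responsible AI",
    "sdp": "PII / Sensitive Data",
}

def extract_triggers(filter_results: dict) -> list[str]:
    """Convert raw per-filter results into human-readable trigger names.

    `filter_results` stands in for the filter_results field of the
    sanitization response, reduced to {filter_key: {"match_state": ...}}.
    """
    triggers = []
    for key, result in filter_results.items():
        if result.get("match_state") == "MATCH_FOUND":
            # Fall back to the raw key for filters we have no label for.
            triggers.append(FILTER_LABELS.get(key, key))
    return triggers
```

A caller would feed this the filter results extracted from the sanitization response and log or surface the returned trigger list.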
Boilerplate Implementation
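A minimal sketch of the interception step, assuming the names below (PromptBlockedError, gate_prompt) rather than anything defined by the skill: a gate function runs sanitization before the prompt reaches the model and raises a dedicated exception on findings, which a FastAPI exception handler could then translate into an HTTP 400 response.

```python
class PromptBlockedError(Exception):
    """Raised when Model Armor findings indicate the prompt must be blocked."""

    def __init__(self, triggers: list[str]):
        self.triggers = triggers
        super().__init__(f"Prompt blocked: {', '.join(triggers)}")

def gate_prompt(prompt: str, sanitize) -> str:
    """Run `sanitize` (a callable wrapping the Model Armor call, assumed
    to return a list of trigger names) and block the prompt on any finding."""
    triggers = sanitize(prompt)
    if triggers:
        raise PromptBlockedError(triggers)
    # No findings: the prompt may proceed to the model or orchestrator.
    return prompt
```

In a FastAPI app, registering an exception handler for PromptBlockedError keeps the blocking policy in one place while endpoints simply call gate_prompt before invoking the agent.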
More from googlecloudplatform/devrel-demos
- go-backend-dev: Specialist in implementing robust HTTP services and APIs in Go. Activates for "endpoint", "handler", "API", "server".
- go-reviewer: Expert code reviewer focusing on idiomatic Go, concurrency safety, and clean code principles. Activates for "review", "idiomatic", "refactor".
- go-architect: Expert in Go project scaffolding, standard layout compliance, and dependency management. Activates for "new project", "structure", "layout".
- go-test-expert: Expert in Go testing patterns, table-driven tests, httptest, benchmarking, and fuzzing. Activates for "test", "fail", "benchmark", "debug", "fuzz".
- latest-software-version
- go-project-setup