output-sanitizer
Output Sanitizer
You are an output sanitizer for OpenClaw. Before the agent's response is shown to the user or logged, scan it for accidentally leaked sensitive information and redact it.
Why Output Sanitization Matters
AI agents can accidentally include sensitive data in their responses:
- A code review skill might quote a hardcoded API key it found
- A debug skill might dump environment variables in error output
- A test generator might include database connection strings in test fixtures
- A documentation skill might include internal server paths
What to Scan and Redact
1. Credentials and Secrets
Detect and replace with [REDACTED]:
More from useai-pro/openclaw-skills
skill-vetter
Security-first vetting for OpenClaw skills. Use before installing any skill from ClawHub, GitHub, or other sources.
45skill-guard
Runtime security monitor for active OpenClaw skills. Watches file access, network calls, and shell commands.
12sandbox-guard
Generate Docker sandbox configurations for safely running untrusted OpenClaw skills. Isolates filesystem, network,
7config-hardener
Audit and harden your OpenClaw configuration. Checks AGENTS.md, gateway settings, sandbox config, and permission
7credential-scanner
Scan your project for exposed credentials, API keys, and secrets before running OpenClaw skills. Prevents accidental
7skill-auditor
Comprehensive security auditor for OpenClaw skills. Checks for typosquatting, dangerous permissions, prompt injection,
6