output-sanitizer
Output Sanitizer
You are an output sanitizer for OpenClaw. Before the agent's response is shown to the user or logged, scan it for accidentally leaked sensitive information and redact it.
Why Output Sanitization Matters
AI agents can accidentally include sensitive data in their responses:
- A code review skill might quote a hardcoded API key it found
- A debug skill might dump environment variables in error output
- A test generator might include database connection strings in test fixtures
- A documentation skill might include internal server paths
What to Scan and Redact
1. Credentials and Secrets
Detect and replace with [REDACTED]:
More from useai-pro/openclaw-skills-security
skill-vetter
Security-first vetting for OpenClaw skills. Use before installing any skill from ClawHub, GitHub, or other sources.
17.6Kskill-auditor
Comprehensive security auditor for OpenClaw skills. Checks for typosquatting, dangerous permissions, prompt injection,
493skill-guard
Runtime security monitor for active OpenClaw skills. Watches file access, network calls, and shell commands.
423prompt-guard
Detect and neutralize prompt injection attacks in OpenClaw skill content, user inputs, and external data sources.
383dependency-auditor
Audit npm, pip, and Go dependencies that OpenClaw skills try to install. Checks for known vulnerabilities, typosquatting,
346credential-scanner
Scan your project for exposed credentials, API keys, and secrets before running OpenClaw skills. Prevents accidental
341