ai-governance

Pass

Audited by Gen Agent Trust Hub on May 1, 2026

Risk Level: SAFE
Full Analysis
  • [PROMPT_INJECTION]: The skill does not contain instructions that attempt to override agent behavior or bypass safety filters. Instead, it provides defensive guidelines against prompt injection as part of its governance framework.
  • [DATA_EXFILTRATION]: No commands for accessing sensitive local files or transmitting data to external servers were found. The skill actively discourages passing sensitive information like PII into AI prompts.
  • [OBFUSCATION]: No obfuscated content, encoded strings, zero-width characters, or hidden text patterns were detected in the instructions or metadata.
  • [REMOTE_CODE_EXECUTION]: There are no patterns involving the download or execution of remote scripts or unverified third-party packages.
  • [COMMAND_EXECUTION]: The skill includes code snippets for configuration and documentation purposes, but it does not instruct the agent to execute dangerous shell commands or modify system settings.
  • [CREDENTIALS_UNSAFE]: No hardcoded API keys, tokens, or other secrets were found. The instructions emphasize the importance of not granting agents access to secrets.
  • [PRIVILEGE_ESCALATION]: No attempts to acquire elevated permissions or bypass security controls were identified.
  • [PERSISTENCE_MECHANISMS]: The skill does not contain commands that attempt to establish persistence on the host system.
  • [METADATA_POISONING]: The metadata correctly reflects the purpose of the skill and does not contain deceptive instructions.
  • [INDIRECT_PROMPT_INJECTION]: While the skill discusses the risk of indirect prompt injection in the context of OWASP LLM01 (Prompt Injection), it does not itself create a vulnerable ingestion surface for untrusted data.
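The checks above can be approximated with simple static scanning. As an illustrative sketch only (the audit tool's actual implementation is not shown here), the snippet below flags two of the categories: zero-width characters used for obfuscation and naive hardcoded-secret shapes. The regexes and labels are assumptions for demonstration, not the auditor's real rules.

```python
import re

# Zero-width / invisible characters sometimes used to hide instructions
ZERO_WIDTH = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff]")

# Naive, illustrative patterns for hardcoded secrets (not exhaustive)
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
]

def scan_text(text: str) -> list[str]:
    """Return finding labels for a skill's text content."""
    findings = []
    if ZERO_WIDTH.search(text):
        findings.append("OBFUSCATION: zero-width characters present")
    for pat in SECRET_PATTERNS:
        if pat.search(text):
            findings.append("CREDENTIALS_UNSAFE: possible hardcoded secret")
            break
    return findings

print(scan_text("plain, clean instructions"))   # → []
print(scan_text("run\u200bthis hidden step"))   # flags OBFUSCATION
```

A real auditor would combine many such detectors (encoding checks, metadata comparison, command-pattern matching) and weigh them into the overall risk level shown in this report.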
Audit Metadata
Risk Level: SAFE
Analyzed: May 1, 2026, 05:52 PM