aiconfig-ai-metrics
Pass
Audited by Gen Agent Trust Hub on May 7, 2026
Risk Level: SAFE
Full Analysis
- [SAFE]: The skill provides legitimate documentation and code examples for the LaunchDarkly server-side AI SDKs. All external resources, including NPM and PyPI packages (e.g., `launchdarkly-server-sdk-ai`, `@launchdarkly/server-sdk-ai-openai`) and API endpoints (e.g., `app.launchdarkly.com`), are official vendor-owned assets and align with the skill's stated purpose.
- [SAFE]: Code examples demonstrate security best practices, such as retrieving API tokens and keys from environment variables (`os.environ`, `process.env`) rather than hardcoding them.
- [SAFE]: The skill includes explicit safety checks, such as verifying the `config.enabled` flag before making model invocations, which serves as a remote kill-switch for AI features.
- [SAFE]: No instances of prompt injection, obfuscation, data exfiltration, or unauthorized command execution were found. All referenced third-party libraries (e.g., `openai`, `anthropic`, `boto3`, `google-genai`, `langchain`) are standard tools for AI development.
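The two patterns the audit highlights (secrets read from environment variables, and an `enabled` flag checked before any model invocation) can be sketched as follows. This is a minimal illustration, not the SDK's actual API: the `AIConfig` dataclass, the `LAUNCHDARKLY_SDK_KEY` variable name, and both helper functions are hypothetical stand-ins for the real LaunchDarkly objects.

```python
import os
from dataclasses import dataclass
from typing import Optional


@dataclass
class AIConfig:
    """Hypothetical stand-in for the AI config object a LaunchDarkly
    server-side AI SDK would return; the real config carries an
    `enabled` flag usable as a remote kill-switch."""
    enabled: bool
    model: str = "example-model"


def get_api_key() -> str:
    # Best practice flagged by the audit: read secrets from the
    # environment rather than hardcoding them in source.
    key = os.environ.get("LAUNCHDARKLY_SDK_KEY")
    if not key:
        raise RuntimeError("LAUNCHDARKLY_SDK_KEY is not set")
    return key


def invoke_model(config: AIConfig, prompt: str) -> Optional[str]:
    # Kill-switch pattern: short-circuit before any model call when
    # the config has been disabled remotely.
    if not config.enabled:
        return None
    return f"[{config.model}] response to: {prompt}"


# A disabled config returns None without ever reaching the model call.
print(invoke_model(AIConfig(enabled=False), "hello"))  # None
print(invoke_model(AIConfig(enabled=True), "hello"))
```

The point of the `config.enabled` check is that the flag is evaluated remotely, so an operator can disable an AI feature in production without redeploying code.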
Audit Metadata