nemoclaw-configure-inference

Pass

Audited by Gen Agent Trust Hub on Apr 7, 2026

Risk Level: SAFE
Full Analysis
  • [SAFE]: The skill provides documentation and command-line instructions for managing LLM inference routing using the openshell and nemoclaw CLI tools. No malicious behavior or security risks were identified.
  • [SAFE]: Instructions include setting environment variables for API keys (e.g., OPENAI_API_KEY, ANTHROPIC_API_KEY), which is a standard and secure practice for credential management. No hardcoded secrets or credentials are present.
  • [SAFE]: The documentation explicitly describes a security architecture in which inference credentials remain on the host machine and the sandbox communicates through a local proxy (inference.local), preventing exposure of sensitive credentials to the agent sandbox.
  • [SAFE]: Mentions of external software installation (e.g., Ollama via Homebrew) and container management (NIM) are part of the developer tool's documented onboarding process and do not involve suspicious remote-execution patterns.
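The credential-management practice noted above can be illustrated with a hypothetical shell setup. The variable names OPENAI_API_KEY and ANTHROPIC_API_KEY come from the audited documentation; the values and the idea of sourcing them from a shell profile are illustrative assumptions, not instructions from the skill itself:

```shell
# Hypothetical sketch: keep inference credentials in host environment
# variables rather than hardcoding them in configuration files.
# The values below are placeholders, not real keys.
export OPENAI_API_KEY="<your-openai-key>"
export ANTHROPIC_API_KEY="<your-anthropic-key>"

# Confirm the variables are set without printing their full values:
printenv OPENAI_API_KEY >/dev/null && echo "OPENAI_API_KEY is set"
printenv ANTHROPIC_API_KEY >/dev/null && echo "ANTHROPIC_API_KEY is set"
```

In practice such exports would live in a shell profile (e.g., ~/.zshrc) or a secrets manager on the host, matching the audit's observation that credentials never need to enter the sandbox.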
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Apr 7, 2026, 04:30 AM