arize-prompt-optimization
Pass
Audited by Gen Agent Trust Hub on May 5, 2026
Risk Level: SAFE
Full Analysis
- [PROMPT_INJECTION]: The skill presents a surface for indirect prompt injection. It involves exporting production trace data, evaluations, and human annotations which are then incorporated into a 'meta-prompt' used to generate optimized versions of an LLM prompt.
- Ingestion points: JSON data exported from the Arize platform via the ax CLI (e.g., trace_/spans.json, dataset_/examples.json, experiment_*/runs.json).
- Boundary markers: The meta-prompt template uses section headers and separator lines (e.g., '========================') to distinguish between the instructions and the performance data.
- Capability inventory: The skill utilizes the ax CLI for data operations and jq for processing JSON files.
- Sanitization: No specific sanitization or filtering of the external trace data is implemented before it is interpolated into the meta-prompt.
- [CREDENTIALS_UNSAFE]: The skill includes extensive guidance on secure credential management. It explicitly instructs the agent and the user to avoid hardcoding API keys, never to read .env files directly, and to use environment variables ($ARIZE_API_KEY) when configuring ax CLI profiles. These are established security best practices.
- [EXTERNAL_DOWNLOADS]: The skill references the installation of the 'arize-ax-cli' tool via standard package managers (uv, pipx, pip). This is an official resource from the skill's author (Arize-ai) and is used for its intended purpose. All network operations are directed towards the Arize platform (app.arize.com).
Audit Metadata