
prompt-caching

Pass

Audited by Gen Agent Trust Hub on Apr 27, 2026

Risk Level: SAFE
Full Analysis
  • [EXTERNAL_DOWNLOADS]: The skill uses, and provides configuration examples for, official software development kits and API endpoints from Anthropic, OpenAI, and Google. These are well-known technology services, and their use is consistent with the skill's purpose.
  • [INDIRECT_PROMPT_INJECTION]: The skill establishes an attack surface by instructing agents to process and structure untrusted external data, such as reference documents and user message history, for inclusion in LLM prompts.
    1. Ingestion points: Data enters the context via the 'contents' and 'messages' structures described in the provider-specific reference files.
    2. Boundary markers: The skill emphasizes structured API message formats (system, user, assistant roles), which provide more robust logical separation than raw text interpolation.
    3. Capability inventory: The skill's scope is confined to prompt optimization; no arbitrary command execution, network exfiltration, or unauthorized file system operations are present in the provided materials.
    4. Sanitization: The procedure focuses on structural prefix matching for performance and does not explicitly detail content-based filtering or sanitization.
  • [CREDENTIALS_UNSAFE]: Documentation and code examples use environment variable placeholders (e.g., $ANTHROPIC_API_KEY) for authentication rather than hardcoded secrets, adhering to standard security practice for API integration.
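The boundary-marker and credential findings above can be sketched together. The snippet below is a minimal illustration, not taken from the skill itself: it builds an Anthropic-style Messages API payload in which stable instructions live in the cacheable `system` block and untrusted document text is confined to a user-role message, with the API key read from an environment variable. The model name, helper name, and document contents are illustrative assumptions.

```python
import os

def build_request(untrusted_doc: str, question: str) -> dict:
    """Build a Messages-API-style payload that keeps untrusted text in a
    user-role message instead of splicing it into the system prompt."""
    return {
        "model": "claude-sonnet-4-5",  # illustrative model name
        "max_tokens": 1024,
        # Stable instructions go in `system`; the cache_control marker
        # lets the provider reuse this prefix across requests
        # (structural prefix matching, as noted in the audit).
        "system": [
            {
                "type": "text",
                "text": "Answer questions using only the supplied document.",
                "cache_control": {"type": "ephemeral"},
            }
        ],
        # Untrusted data stays inside a user message, so the role
        # boundary, not raw string interpolation into the instructions,
        # separates it from the system prompt above.
        "messages": [
            {
                "role": "user",
                "content": f"<document>\n{untrusted_doc}\n</document>\n\n{question}",
            }
        ],
    }

# Authentication via environment variable, never a hardcoded secret.
api_key = os.environ.get("ANTHROPIC_API_KEY", "<unset>")

payload = build_request("Quarterly revenue rose 12%.", "What changed?")
print(payload["messages"][0]["role"])  # → user
```

Note that role separation mitigates accidental instruction/data mixing but is not content sanitization; as the audit observes, the skill does not add content-based filtering on top of this structure.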
Audit Metadata
Risk Level: SAFE
Analyzed: Apr 27, 2026, 05:51 PM