# LLM Provider Usage Statistics

Reference documentation for how different LLM providers report token usage.

## Quick Reference: Token Counting Semantics

| Provider | `input_tokens` meaning | Cache token fields | Must add cache to get total? |
|---|---|---|---|
| OpenAI | TOTAL (includes cached) | `cached_tokens` (subset of total) | No |
| Anthropic | NON-cached only | `cache_read_input_tokens` + `cache_creation_input_tokens` | Yes |
| Gemini | TOTAL (includes cached) | `cached_content_token_count` (subset of total) | No |
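The table above can be illustrated with hypothetical usage payloads. The field names follow each provider's API style, but exact names vary by SDK and API version, and the numbers are made up: all three show the same 1000-token prompt, 800 of which hit the cache.

```python
# OpenAI: the input total INCLUDES cached tokens; cached_tokens is a subset.
openai = {"input_tokens": 1000, "cached_tokens": 800}
openai_uncached = openai["input_tokens"] - openai["cached_tokens"]  # 200 uncached

# Anthropic: input_tokens EXCLUDES cache reads/writes; they are separate fields
# that must be ADDED to recover the total.
anthropic = {
    "input_tokens": 200,
    "cache_read_input_tokens": 700,
    "cache_creation_input_tokens": 100,
}
anthropic_total = (anthropic["input_tokens"]
                   + anthropic["cache_read_input_tokens"]
                   + anthropic["cache_creation_input_tokens"])  # 1000 total

# Gemini: the prompt total INCLUDES cached content; the cached count is a subset.
gemini = {"prompt_token_count": 1000, "cached_content_token_count": 800}
gemini_uncached = gemini["prompt_token_count"] - gemini["cached_content_token_count"]
```

Note the asymmetry: for OpenAI and Gemini you *subtract* to isolate uncached tokens, while for Anthropic you *add* to recover the total.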

**Critical difference:** Anthropic's `input_tokens` excludes cached tokens, so you must add them yourself:

total_input = input_tokens + cache_read_input_tokens + cache_creation_input_tokens

## Quick Reference: Prefix Caching

Repository: letta-ai/letta (21.8K GitHub stars) · First seen: Mar 13, 2026