# LiteLLM — Skill Router

Call 100+ LLMs (OpenAI, Anthropic, Azure, Bedrock, Vertex, Ollama, etc.) with the OpenAI input/output format. SDK + self-hostable proxy gateway with routing, fallbacks, caching, observability, and cost tracking.
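As a quick illustration of that unified interface, here is a minimal sketch. The model names and helper functions are illustrative, and a real call assumes the relevant provider API key (e.g. `OPENAI_API_KEY`) is set in the environment:

```python
def build_messages(prompt: str) -> list:
    """Build an OpenAI-format message list from a single user prompt."""
    return [{"role": "user", "content": prompt}]


def ask(model: str, prompt: str) -> str:
    """Call any supported provider through litellm's unified completion()."""
    from litellm import completion  # deferred so the sketch is inspectable without litellm installed

    # The call shape is identical for every provider; only the model string
    # changes, e.g. "gpt-4o-mini" (OpenAI) or "anthropic/claude-3-5-sonnet-20240620".
    response = completion(model=model, messages=build_messages(prompt))
    # Responses follow the OpenAI response shape regardless of provider.
    return response.choices[0].message.content
```

The same `ask()` then works against Azure (`azure/<deployment>`), Bedrock (`bedrock/...`), or a local Ollama model (`ollama/llama3`) just by switching the model prefix.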

Source: docs.litellm.ai | Python SDK: v1.52.x | License: MIT

## Reference Files

| Reference | File | Read When |
| --- | --- | --- |
| Overview & Quickstart | `references/00-overview.md` | Getting started, install, core concepts, when to use SDK vs Proxy |
| Completion API | `references/01-completion-api.md` | `litellm.completion()`, params, messages, response shape |
| Providers & Models | `references/02-providers.md` | Provider prefixes, OpenAI, Anthropic, Bedrock, Vertex, Azure, Ollama |
| Streaming | `references/03-streaming.md` | Sync/async streaming, chunk parsing, stream wrappers |
| Async & Concurrency | `references/04-async.md` | `acompletion`, async batching, throughput patterns |
| Router (SDK) | `references/05-router.md` | Multi-deployment load balancing, routing strategies, model groups |
| Proxy Server | `references/06-proxy-server.md` | LiteLLM Proxy: `config.yaml`, virtual keys, `/chat/completions` endpoint |
| Fallbacks & Retries | `references/07-fallbacks-retries.md` | `num_retries`, fallback chains, context window fallbacks, timeout policy |
| Caching | `references/08-caching.md` | In-memory, Redis, S3 caching, semantic caching, cache controls |
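To show how the Router and retry pieces fit together, here is a hedged sketch of an SDK `Router` with two deployments behind one model group. The deployment model IDs and API keys below are placeholders, not working credentials:

```python
# Two deployments share the group name "gpt-4o"; the Router load-balances
# between them and retries transient failures.
MODEL_LIST = [
    {
        "model_name": "gpt-4o",  # the group name callers use
        "litellm_params": {"model": "azure/gpt-4o-eu", "api_key": "sk-azure-placeholder"},
    },
    {
        "model_name": "gpt-4o",
        "litellm_params": {"model": "openai/gpt-4o", "api_key": "sk-openai-placeholder"},
    },
]


def make_router():
    """Build a Router that shuffles requests across the deployments above."""
    from litellm import Router  # deferred import: sketch stays inspectable without litellm

    return Router(
        model_list=MODEL_LIST,
        num_retries=2,                      # retry transient provider errors
        routing_strategy="simple-shuffle",  # random pick among healthy deployments
    )
```

Callers then use the group name, not a specific deployment: `make_router().completion(model="gpt-4o", messages=...)` picks one of the two deployments per request.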