addon-direct-llm-sdk
Add-on: Direct LLM SDK
Use this skill when a project needs explicit provider SDK control for chat, completions, embeddings, or tool calls without an additional orchestration framework.
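As a minimal sketch of what "direct SDK control" means here: the caller assembles the request itself and hands it straight to the provider client, with no orchestration layer in between. The helper name and model id below are illustrative, not part of this skill's contract; the commented-out call assumes the official `openai` Python SDK.

```python
# Sketch: build a chat request dict and pass it directly to a provider SDK.
# `build_chat_request` is a hypothetical helper, not defined by this skill.

def build_chat_request(model: str, user_prompt: str, stream: bool = True) -> dict:
    """Assemble keyword arguments to pass straight to the provider SDK."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
        "stream": stream,
    }

# With a real client (requires an API key in the environment):
# from openai import OpenAI
# client = OpenAI(timeout=60, max_retries=2)
# response = client.chat.completions.create(
#     **build_chat_request("gpt-4o-mini", "Hello")
# )
```

Because there is no framework abstraction, every provider-specific knob (streaming, timeout, retries) is visible at the call site.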
Compatibility
- Works with `architect-python-uv-fastapi-sqlalchemy`, `architect-python-uv-batch`, `architect-nextjs-bun-app`, and `architect-next-prisma-bun-vector`.
- Use this instead of `addon-langchain-llm` when abstraction overhead is not wanted.
- If paired with `addon-llm-judge-evals`, do not assume auto backend resolution; the current judge contract must be extended before direct SDK becomes a supported judge backend.
Inputs
Collect:
- `SDK_PROVIDER`: `openai` | `anthropic` | `google` | `openrouter`.
- `DEFAULT_MODEL`: provider model id.
- `ENABLE_STREAMING`: `yes` | `no` (default `yes`).
- `REQUEST_TIMEOUT_SECONDS`: default `60`.
- `MAX_RETRIES`: default `2`.
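The collected inputs above can be captured in a small settings object that applies the stated defaults and rejects unknown providers. This is a sketch, not a prescribed interface; the class and field names are assumptions.

```python
from dataclasses import dataclass

# Providers accepted by SDK_PROVIDER, per the Inputs list above.
ALLOWED_PROVIDERS = {"openai", "anthropic", "google", "openrouter"}

@dataclass(frozen=True)
class LLMSettings:
    """Hypothetical container for the skill's collected inputs."""
    sdk_provider: str
    default_model: str
    enable_streaming: bool = True      # ENABLE_STREAMING default: yes
    request_timeout_seconds: int = 60  # REQUEST_TIMEOUT_SECONDS default
    max_retries: int = 2               # MAX_RETRIES default

    def __post_init__(self) -> None:
        if self.sdk_provider not in ALLOWED_PROVIDERS:
            raise ValueError(f"unsupported SDK_PROVIDER: {self.sdk_provider!r}")

settings = LLMSettings(sdk_provider="openai", default_model="gpt-4o-mini")
```

Validating at construction time keeps a bad `SDK_PROVIDER` value from surfacing later as an opaque SDK error.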