# addon-langchain-llm

Add-on: LangChain LLM

Use this skill when an existing project needs LangChain primitives for chat, retrieval, or summarization.
## Compatibility

- Works with `architect-python-uv-fastapi-sqlalchemy`, `architect-python-uv-batch`, and `architect-nextjs-bun-app`.
- Can be combined with `addon-rag-ingestion-pipeline`.
- Can be combined with `addon-langgraph-agent` when graph orchestration is required.
- Can be combined with `addon-llm-judge-evals`; when used together, declare `langchain` in `config/skill_manifest.json` so the judge runner can resolve the backend without guessing.
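The exact schema of `config/skill_manifest.json` is not specified here; a hypothetical fragment declaring the LangChain backend for the judge runner might look like the following (the `backends` key name is an assumption, not part of the skill):

```json
{
  "skill": "addon-langchain-llm",
  "backends": ["langchain"]
}
```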
## Inputs

Collect:

- `LLM_PROVIDER`: `openai` | `anthropic` | `ollama`.
- `DEFAULT_MODEL`: provider model id.
- `ENABLE_STREAMING`: `yes` | `no` (default `yes`).
- `USE_RAG`: `yes` | `no`.
- `MAX_INPUT_TOKENS`: default `8000`.
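A minimal sketch of collecting and validating these inputs from environment variables. The `LLMSettings` helper and the env-var names matching the input names are assumptions for illustration; the skill does not prescribe how the values are gathered.

```python
import os
from dataclasses import dataclass

# Providers the skill supports (from the LLM_PROVIDER input above).
_PROVIDERS = {"openai", "anthropic", "ollama"}


@dataclass(frozen=True)
class LLMSettings:
    """Hypothetical container for the add-on's inputs, with the stated defaults."""

    llm_provider: str
    default_model: str
    enable_streaming: bool = True   # ENABLE_STREAMING defaults to yes
    use_rag: bool = False
    max_input_tokens: int = 8000    # MAX_INPUT_TOKENS defaults to 8000

    @classmethod
    def from_env(cls) -> "LLMSettings":
        provider = os.environ["LLM_PROVIDER"]
        if provider not in _PROVIDERS:
            raise ValueError(f"LLM_PROVIDER must be one of {sorted(_PROVIDERS)}")
        return cls(
            llm_provider=provider,
            default_model=os.environ["DEFAULT_MODEL"],
            enable_streaming=os.environ.get("ENABLE_STREAMING", "yes") == "yes",
            use_rag=os.environ.get("USE_RAG", "no") == "yes",
            max_input_tokens=int(os.environ.get("MAX_INPUT_TOKENS", "8000")),
        )
```

Validating the provider up front keeps a typo from surfacing later as an opaque import or authentication error inside LangChain.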