# MLflow Onboarding
MLflow supports two broad use cases that require different onboarding paths:
- GenAI applications and agents: LLM-powered apps, chatbots, RAG pipelines, tool-calling agents. Key MLflow features include tracing for observability, evaluation with LLM judges, and prompt management, among others.
- Traditional ML / deep learning models: scikit-learn, PyTorch, TensorFlow, XGBoost, etc. Key MLflow features include experiment tracking (parameters, metrics, artifacts), model logging, and model deployment, among others.
Determining which use case applies is the first and most important step. The onboarding path, quickstart tutorials, and integration steps differ significantly between the two.
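As a rough illustration of how different the entry points are, here is a minimal sketch of the first MLflow call each path typically starts with (assumes `mlflow` 2.x plus the `openai` and `scikit-learn` packages; a real project would pick one path, not both):

```python
import mlflow

mlflow.set_experiment("onboarding-test")  # both paths start by choosing an experiment

# GenAI path: enable tracing so every OpenAI client call is captured as a trace.
mlflow.openai.autolog()

# Traditional ML path: enable autologging so fit() records params,
# metrics, and the trained model automatically.
mlflow.sklearn.autolog()
```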
## Step 1: Determine the Use Case
Before recommending tutorials or integration steps, determine which use case the user is working on. Use the signals below, checking them in order. If the signals are ambiguous or absent, you MUST ask the user directly.
### Signal 1: Check the Codebase
Search the user's project for imports and usage patterns that indicate the use case:
GenAI indicators (any of these suggest GenAI):
- Imports from LLM client libraries:
  `openai`, `anthropic`, `google.generativeai`, `langchain`, `langchain_openai`, `langgraph`, `llama_index`, `litellm`, `autogen`, `crewai`, `dspy`
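One way to automate this check is to scan the project's Python files for those module names at import sites. A sketch (the `looks_like_genai` helper is illustrative, not part of MLflow or this skill):

```python
from pathlib import Path
import re

# Illustrative list mirroring the GenAI indicators above.
GENAI_MODULES = [
    "openai", "anthropic", "google.generativeai", "langchain",
    "langchain_openai", "langgraph", "llama_index", "litellm",
    "autogen", "crewai", "dspy",
]

# Match "import X" or "from X import ..." at the start of a line.
IMPORT_RE = re.compile(
    r"^\s*(?:import|from)\s+(?:"
    + "|".join(re.escape(m) for m in GENAI_MODULES)
    + r")\b",
    re.MULTILINE,
)

def looks_like_genai(project_root: str) -> bool:
    """Return True if any .py file under project_root imports an LLM client library."""
    for path in Path(project_root).rglob("*.py"):
        try:
            text = path.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue
        if IMPORT_RE.search(text):
            return True
    return False
```

A recursive `grep` for the same module names works just as well for a quick manual check; the point is to look at import sites rather than at dependency files, since a pinned dependency may be unused.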