MLflow Onboarding
MLflow supports two broad use cases that require different onboarding paths:
- GenAI applications and agents: LLM-powered apps, chatbots, RAG pipelines, tool-calling agents. Key MLflow features include tracing for observability, evaluation with LLM judges, and prompt management, among others.
- Traditional ML / deep learning models: scikit-learn, PyTorch, TensorFlow, XGBoost, etc. Key MLflow features include experiment tracking (parameters, metrics, artifacts), model logging, and model deployment, among others.
Determining which use case applies is the first and most important step. The onboarding path, quickstart tutorials, and integration steps differ significantly between the two.
Step 1: Determine the Use Case
Before recommending tutorials or integration steps, determine which use case the user is working on. Use the signals below, checking them in order. If the signals are ambiguous or absent, you MUST ask the user directly.
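The decision procedure above can be sketched as a small helper. This is a hypothetical illustration, not an MLflow API; the indicator sets are assumptions abbreviated from the signals described in this document:

```python
# Hypothetical helper sketching the Step 1 decision procedure; not part of MLflow.
# The indicator sets below are illustrative subsets of the signals in this guide.
GENAI_IMPORTS = {"openai", "anthropic", "langchain", "langgraph", "litellm", "dspy"}
TRADITIONAL_ML_IMPORTS = {"sklearn", "torch", "tensorflow", "xgboost", "lightgbm"}

def classify_use_case(imports: set[str]) -> str:
    """Return 'genai', 'traditional_ml', or 'ask_user' when signals are ambiguous."""
    has_genai = bool(imports & GENAI_IMPORTS)
    has_ml = bool(imports & TRADITIONAL_ML_IMPORTS)
    if has_genai and not has_ml:
        return "genai"
    if has_ml and not has_genai:
        return "traditional_ml"
    # Both kinds of imports, or neither: signals are ambiguous, so ask the user.
    return "ask_user"
```

Note the fall-through: any mixed or empty result maps to asking the user directly, mirroring the MUST above.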
Signal 1: Check the Codebase
Search the user's project for imports and usage patterns that indicate the use case:
GenAI indicators (any of these suggest GenAI):
- Imports from LLM client libraries:
`openai`, `anthropic`, `google.generativeai`, `langchain`, `langchain_openai`, `langgraph`, `llama_index`, `litellm`, `autogen`, `crewai`, `dspy`
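One way to run this search is to parse each `.py` file and collect matching imports. A minimal standard-library sketch, assuming the indicator modules listed above (the function name is illustrative, not an MLflow API):

```python
import ast
from pathlib import Path

# Indicator modules from the GenAI import list above.
GENAI_MODULES = {
    "openai", "anthropic", "google.generativeai", "langchain",
    "langchain_openai", "langgraph", "llama_index", "litellm",
    "autogen", "crewai", "dspy",
}

def find_genai_imports(project_root: str) -> set[str]:
    """Collect GenAI-library imports found in a project's Python files."""
    found: set[str] = set()
    for path in Path(project_root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue  # Skip files that do not parse cleanly.
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            else:
                continue
            for name in names:
                # Match the full dotted module or its top-level package.
                if name in GENAI_MODULES or name.split(".")[0] in GENAI_MODULES:
                    found.add(name)
    return found
```

Parsing with `ast` rather than grepping avoids false positives from imports mentioned in strings or comments, at the cost of skipping files with syntax errors.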