# Model Versioning Tracker

## Overview
Track and manage AI/ML model versions using MLflow, DVC, or Weights & Biases. Log model metadata (hyperparameters, training data hash, framework version), record evaluation metrics (accuracy, F1, latency), manage model registry transitions (Staging, Production, Archived), and generate model cards documenting lineage and performance.
## Prerequisites

- MLflow tracking server running locally or remotely (`mlflow server` or managed MLflow)
- Python 3.9+ with `mlflow`, `pandas`, and the relevant ML framework installed
- Model artifacts accessible on the local filesystem or cloud storage (S3, GCS)
- Write access to the MLflow tracking URI and artifact store
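The tracking-server prerequisite can be checked up front by pointing the client at the server through an environment variable; a minimal sketch, where the address is a placeholder for your own deployment:

```shell
# Point the MLflow client and CLI at the tracking server.
# http://127.0.0.1:5000 is a placeholder; substitute your server's address.
export MLFLOW_TRACKING_URI="http://127.0.0.1:5000"

# Sanity-check connectivity (requires the mlflow CLI on PATH):
# mlflow experiments list
```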
## Instructions

- Connect to the MLflow tracking server by setting `MLFLOW_TRACKING_URI` and verify connectivity with `mlflow experiments list`.
- Create or select an MLflow experiment for the model project with `mlflow experiments create --experiment-name <name>`.
- Log a new model version: start an MLflow run, log parameters (learning rate, epochs, batch size), log metrics (accuracy, loss, F1 score), and log the model artifact with `mlflow.<flavor>.log_model()`.
- Register the model in the MLflow Model Registry using `mlflow.register_model()` with the run URI and a descriptive model name.
- Transition the model version through stages (`None` -> `Staging` -> `Production`) using `client.transition_model_version_stage()`, archiving previous Production versions.
## Related skills