MLX Fine-Tuning

Comprehensive skill for fine-tuning Large Language Models (LLMs) with the MLX framework on Apple Silicon (M1/M2/M3/M4).

Purpose

Enable efficient LLM fine-tuning on Apple Silicon using MLX's unified memory architecture and Metal GPU acceleration. The skill focuses on LoRA (Low-Rank Adaptation), a parameter-efficient fine-tuning method that avoids the need for expensive discrete GPU hardware.
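Why LoRA is parameter-efficient can be seen in a few lines. The sketch below uses NumPy rather than MLX purely to illustrate the math (the matrix sizes and scaling constant are hypothetical, not taken from any MLX default): the pretrained weight W stays frozen, and only two small low-rank matrices A and B are trained.

```python
import numpy as np

# LoRA: instead of updating a full weight matrix W (d_out x d_in),
# train two small matrices A (r x d_in) and B (d_out x r), with r << d.
# Adapted forward pass: y = W @ x + (alpha / r) * B @ (A @ x)

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 512, 512, 8, 16  # hypothetical sizes

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init

def lora_forward(x):
    # Base projection plus the scaled low-rank update.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapter starts as a no-op:
# the adapted model reproduces the base model exactly.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: r * (d_in + d_out) instead of d_in * d_out.
full_params = d_in * d_out
lora_params = r * (d_in + d_out)
print(f"full: {full_params:,}  LoRA: {lora_params:,} "
      f"({lora_params / full_params:.1%} of full)")
```

Because only A and B are trained (here about 3% of the full matrix's parameters), optimizer state and gradients shrink proportionally, which is what makes fine-tuning fit in a Mac's unified memory.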

When to Use This Skill

Invoke this skill when:

  • Setting up MLX fine-tuning on Apple Silicon
  • Converting models from HuggingFace to MLX format
  • Configuring LoRA adapters for fine-tuning
  • Optimizing hyperparameters for specific datasets
  • Troubleshooting memory or performance issues
  • Benchmarking fine-tuned models
  • Managing and exporting adapters
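The convert → train → fuse workflow behind several of the bullets above can be sketched with the mlx-lm command-line tools. This is a hedged outline, not a verbatim recipe: the model name and data paths are placeholders, and flag names may vary between mlx-lm versions, so check `--help` on your installed release.

```shell
# Install the MLX language-model tooling (assumes Python on Apple Silicon).
pip install mlx-lm

# Convert a HuggingFace model to MLX format, optionally quantizing (-q)
# to 4-bit to reduce memory use. Model name is a placeholder.
mlx_lm.convert --hf-path mistralai/Mistral-7B-Instruct-v0.2 -q

# Fine-tune with LoRA adapters on a local JSONL dataset
# (expects train.jsonl / valid.jsonl in ./data).
mlx_lm.lora --model mistralai/Mistral-7B-Instruct-v0.2 \
    --train --data ./data \
    --batch-size 4 --iters 1000 \
    --adapter-path ./adapters

# Fuse the trained adapters back into the base weights for export.
mlx_lm.fuse --model mistralai/Mistral-7B-Instruct-v0.2 \
    --adapter-path ./adapters
```

If memory pressure is an issue, the usual levers are lowering `--batch-size`, quantizing the base model, and reducing the number of layers the adapters are applied to.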
Repository: 89jobrien/steve
First seen: Mar 9, 2026