MLX Fine-Tuning
Comprehensive skill for fine-tuning large language models (LLMs) using the MLX framework on Apple Silicon (M1/M2/M3/M4).
Purpose
Enable efficient LLM fine-tuning on Apple Silicon using MLX's unified memory architecture and Metal GPU acceleration. Focus on LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning without requiring expensive GPU hardware.
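The core idea behind LoRA is that instead of updating a full weight matrix W during fine-tuning, you freeze W and learn two small low-rank matrices A and B, so the effective weight becomes W + (alpha/r)·BA. A minimal NumPy sketch of that arithmetic (sizes, names, and scaling are illustrative only, not tied to any MLX API):

```python
import numpy as np

# Frozen pretrained weight: d_out x d_in (tiny sizes for illustration)
d_out, d_in, r, alpha = 8, 16, 2, 16.0
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))

# LoRA factors: B starts at zero so training begins exactly at the base model
A = rng.normal(size=(r, d_in)) * 0.01   # r x d_in
B = np.zeros((d_out, r))                # d_out x r

def lora_forward(x, W, A, B, alpha, r):
    """y = W x + (alpha / r) * B (A x) -- only A and B would receive gradients."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))

# With B = 0 the adapted layer matches the frozen base layer exactly
assert np.allclose(lora_forward(x, W, A, B, alpha, r), W @ x)

# Parameter savings: trainable LoRA params vs. full fine-tuning of W
full_params = W.size            # 8 * 16 = 128
lora_params = A.size + B.size   # 32 + 16 = 48; the gap grows with layer size
```

At realistic transformer dimensions (e.g. d = 4096, r = 8), the trainable parameter count drops by orders of magnitude, which is what makes fine-tuning feasible within a laptop's unified memory.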
When to Use This Skill
Invoke this skill when:
- Setting up MLX fine-tuning on Apple Silicon
- Converting models from HuggingFace to MLX format
- Configuring LoRA adapters for fine-tuning
- Optimizing hyperparameters for specific datasets
- Troubleshooting memory or performance issues
- Benchmarking fine-tuned models
- Managing and exporting adapters
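For the configuration and hyperparameter tasks above, mlx-lm supports driving a LoRA run from a YAML config instead of CLI flags. The fragment below is a sketch modeled on the example config shipped with mlx-lm; the model path and all values are placeholders, and key names can differ between mlx-lm versions, so verify against your installed version's schema:

```yaml
# lora_config.yaml -- illustrative values; check your mlx-lm version's schema
model: "mlx-community/Mistral-7B-Instruct-v0.3-4bit"  # any MLX-format model
train: true
data: "data"            # directory containing train.jsonl / valid.jsonl
batch_size: 2           # lower this first if you hit memory pressure
iters: 600
learning_rate: 1.0e-5
lora_parameters:
  rank: 8               # adapter rank r; higher = more capacity, more memory
  scale: 16.0           # scaling applied to the low-rank update
  dropout: 0.0
```

A run would then be launched with something like `mlx_lm.lora --config lora_config.yaml` (entry-point name taken from mlx-lm's documented CLI; confirm for your version).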