unsloth-fft


Overview

Full Fine-Tuning (FFT) in Unsloth updates 100% of the model's weights directly, bypassing the low-rank approximations of LoRA. By using Unsloth's optimized gradient checkpointing, FFT can fit significantly larger batch sizes in the same VRAM while still training every parameter.
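
To make the contrast concrete, here is a minimal sketch, assuming a recent Unsloth release that exposes the full_finetuning flag on FastLanguageModel.from_pretrained; the model name and LoRA settings are illustrative placeholders.

```python
# Sketch: LoRA trains small adapter matrices, FFT trains every weight.
# Model name and hyperparameters are illustrative placeholders.
from unsloth import FastLanguageModel

# LoRA path: base weights stay frozen, only low-rank adapters receive gradients.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Meta-Llama-3.1-8B",
    max_seq_length = 2048,
    load_in_4bit = True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj"],
    use_gradient_checkpointing = "unsloth",
)

# FFT path: every weight receives gradient updates, so no adapter wrapping.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Meta-Llama-3.1-8B",
    max_seq_length = 2048,
    load_in_4bit = False,        # full-precision weights are required for FFT
    full_finetuning = True,
)
```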

When to Use

  • When performing base model pre-training or continued pre-training on large datasets.
  • When model-wide behavior must change in ways that adapters (LoRA) cannot fully capture.
  • When sufficient VRAM is available to handle full model gradients.

Decision Tree

  1. Do you need to modify 100% of the model weights?
    • Yes: Proceed with FFT.
    • No: Use [[unsloth-lora]].
  2. Is VRAM limited (e.g., < 24GB for a 7B model)?
    • Yes: Enable use_gradient_checkpointing = 'unsloth' and the adamw_8bit optimizer (see the sketch after this list).
    • No: Use standard BF16 and larger batch sizes.
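
As a rough illustration of branch 2, here is a hedged sketch of trainer arguments using TRL's SFTConfig; the values are placeholders, and the standard gradient_checkpointing switch stands in for the Unsloth-specific use_gradient_checkpointing = 'unsloth' flag, whose exact placement depends on your Unsloth version.

```python
# Sketch of the VRAM-limited branch (values are placeholders).
# With ample VRAM, drop gradient_checkpointing, keep bf16 = True,
# switch optim back to "adamw_torch", and raise the batch size.
from trl import SFTConfig

low_vram_args = SFTConfig(
    per_device_train_batch_size = 1,   # small micro-batch
    gradient_accumulation_steps = 8,   # keeps an effective batch size of 8
    gradient_checkpointing = True,     # stand-in for use_gradient_checkpointing = 'unsloth'
    optim = "adamw_8bit",              # 8-bit optimizer states cut optimizer VRAM roughly 4x
    bf16 = True,
    learning_rate = 2e-5,
    output_dir = "outputs",
)
```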

Workflows

Initializing Full Fine-Tuning
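
A minimal end-to-end sketch of this workflow, assuming Unsloth's FastLanguageModel with full_finetuning = True plus TRL's SFTTrainer; the model name, dataset, and hyperparameters are placeholders, and exact SFTTrainer/SFTConfig arguments vary across TRL versions.

```python
# Sketch: load the model for full fine-tuning, then train with TRL's SFTTrainer.
# Model, dataset, and hyperparameters are illustrative placeholders.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Meta-Llama-3.1-8B",  # placeholder model
    max_seq_length = 2048,
    load_in_4bit = False,        # FFT needs full-precision weights, not 4-bit
    full_finetuning = True,      # update all weights instead of attaching adapters
)

dataset = load_dataset("json", data_files = "train.jsonl", split = "train")  # placeholder data

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    args = SFTConfig(
        dataset_text_field = "text",           # assumes a pre-formatted "text" column
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        learning_rate = 2e-5,                  # FFT usually wants a lower LR than LoRA
        optim = "adamw_8bit",
        bf16 = True,
        max_steps = 100,
        output_dir = "outputs",
    ),
)
trainer.train()
```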

Related skills

  • [[unsloth-lora]]: adapter-based fine-tuning for when full weight updates are not required.
