# QLoRA: Quantized Low-Rank Adaptation

QLoRA enables fine-tuning of large language models on consumer GPUs by combining 4-bit quantization with LoRA adapters. A 65B model can be fine-tuned on a single 48GB GPU while matching 16-bit fine-tuning performance.
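
The loading step below is a minimal sketch of the quantized half of this recipe, assuming the Hugging Face `transformers` + `bitsandbytes` stack; the model id is a placeholder and the settings are illustrative, not values prescribed by this skill.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit quantization config: NF4 data type plus double quantization
# (quantizing the quantization constants), with bf16 compute for matmuls
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Placeholder model id; any causal LM on the Hub loads the same way
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```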

**Prerequisites:** This skill assumes familiarity with LoRA. See the `lora` skill for LoRA fundamentals (`LoraConfig`, `target_modules`, training patterns).
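
Building on the quantized base model above, here is a hedged sketch of attaching LoRA adapters with `peft`; the rank, alpha, and LLaMA-style `target_modules` names are assumptions to adapt per model, not values from this skill.

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Casts norm/embedding layers for training stability on a k-bit base model
# and enables gradient-checkpointing compatibility
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,             # assumed rank; tune per task
    lora_alpha=32,
    lora_dropout=0.05,
    # Assumed LLaMA-style projection names; inspect model.named_modules()
    # to find the right targets for your architecture
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights train
```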

## Table of Contents

- Core Innovations
