# Using LoRA for Fine-tuning

LoRA (Low-Rank Adaptation) enables efficient fine-tuning by freezing pretrained weights and injecting small trainable matrices into transformer layers. This reduces trainable parameters to ~0.1% of the original model while maintaining performance.
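The idea above can be sketched in a few lines: the frozen pretrained weight `W` is augmented with a trainable low-rank update `B @ A`, scaled by `alpha / r`, so the forward pass computes `y = Wx + (alpha/r)·BAx`. This is a minimal numpy illustration of the technique, not any particular library's API; the class name, init scales, and parameter-counting helpers are illustrative choices.

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA sketch: frozen weight W plus a trainable low-rank update B @ A."""

    def __init__(self, in_features, out_features, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        # Pretrained weight: frozen during fine-tuning (no gradients).
        self.W = rng.standard_normal((out_features, in_features)) * 0.02
        # Low-rank factors: A starts small and random, B starts at zero,
        # so the adapted layer initially matches the pretrained one exactly.
        self.A = rng.standard_normal((r, in_features)) * 0.01
        self.B = np.zeros((out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # y = W x + (alpha / r) * B A x ; only A and B would be trained.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

    def trainable_params(self):
        return self.A.size + self.B.size

    def frozen_params(self):
        return self.W.size

layer = LoRALinear(in_features=4096, out_features=4096, r=8)
ratio = layer.trainable_params() / layer.frozen_params()
print(f"trainable fraction: {ratio:.4%}")  # small fraction of the frozen weights
```

Because `B` is initialized to zero, the low-rank branch contributes nothing at the start of training, and the trainable fraction shrinks as `r` decreases relative to the layer width.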

## Table of Contents

- Core Concepts
- How LoRA Works
