# Fine-Tuning

Adapt LLMs to specific tasks and domains efficiently.

## Quick Start

### LoRA Fine-Tuning with PEFT
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model, TaskType
from datasets import load_dataset
from trl import SFTTrainer  # used in the supervised fine-tuning step

# Load base model and tokenizer
model_name = "meta-llama/Llama-2-7b-hf"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama ships without a pad token

# Attach LoRA adapters: only the low-rank update matrices are trained
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16,
    lora_dropout=0.05, target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
```
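To see why LoRA is "efficient", it helps to count parameters. A full fine-tune of a `d_out x d_in` weight matrix updates every entry; LoRA instead trains two low-rank factors, `A` (`r x d_in`) and `B` (`d_out x r`). A minimal sketch, using the 4096 hidden size of Llama-2-7b's attention projections and the rank `r=8` from the config above as illustrative values:

```python
# Parameter savings from LoRA on a single weight matrix (illustrative numbers).
def lora_params(d_out: int, d_in: int, r: int) -> int:
    # LoRA trains A (r x d_in) plus B (d_out x r) in place of the full matrix.
    return r * d_in + d_out * r

d = 4096                     # hidden size of a Llama-2-7b projection
full = d * d                 # full fine-tune: 16,777,216 params per matrix
lora = lora_params(d, d, 8)  # LoRA at rank 8: 65,536 params per matrix
print(full, lora, full // lora)  # prints 16777216 65536 256
```

At rank 8 each adapted projection trains roughly 256x fewer parameters than a full update, which is why LoRA fits on far smaller GPUs.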
## Related skills

More from pluginagentmarketplace/custom-plugin-ai-engineer:

- **prompt-engineering**: Prompt design, optimization, few-shot learning, and chain-of-thought techniques for LLM applications.
- **llm-basics**: LLM architecture, tokenization, transformers, and inference optimization. Use for understanding and working with language models.
- **model-deployment**: LLM deployment strategies including vLLM, TGI, and cloud inference endpoints.
- **evaluation-metrics**: LLM evaluation frameworks, benchmarks, and quality metrics for production systems.
- **vector-databases**: Vector database selection, indexing strategies, and semantic search optimization.
- **agent-frameworks**: AI agent development with LangChain, CrewAI, AutoGen, and tool integration patterns.