Ollama

Overview

Ollama makes running large language models locally as simple as ollama run llama3: no cloud service, no API keys, no per-token costs, because models run entirely on your hardware. It supports Llama 3, Mistral, Phi, Gemma, Code Llama, and dozens of other open models. This skill covers model management, API integration, custom model configuration, GPU setup, and building applications with local LLMs.
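The API integration mentioned above goes through Ollama's local HTTP server, which listens on port 11434 by default. A minimal sketch, assuming the server is running and the llama3 model has already been pulled:

```shell
# Single-turn completion via the local REST API (default port 11434).
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

# Chat-style endpoint that takes a message history.
curl -s http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [{"role": "user", "content": "Hello!"}],
  "stream": false
}'
```

With "stream": false the server returns one JSON object per request; omit it to receive a stream of newline-delimited JSON chunks, which is the default and better suited to interactive UIs.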

Instructions

Step 1: Installation

# Linux
curl -fsSL https://ollama.com/install.sh | sh

# macOS
brew install ollama

# Docker
docker run -d --gpus all -v ollama_data:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
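After installing, the setup can be verified and models managed from the same CLI. A sketch, assuming the llama3 tag is available in the Ollama library and disk space allows the download:

```shell
# Start the server if it is not already running
# (the Linux installer normally registers it as a systemd service).
ollama serve &

# Pull a model, run a one-off prompt, and list what is installed.
ollama pull llama3
ollama run llama3 "Say hello in one sentence."
ollama list

# The server also answers over HTTP; list installed models via the API.
curl -s http://localhost:11434/api/tags

# Remove a model to free disk space.
ollama rm llama3
```

Models are stored under ~/.ollama by default (or the ollama_data volume in the Docker setup above), so removing and re-pulling a model is cheap to script but expensive in bandwidth.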