Qwen via Ollama

Local LLM inference using Qwen 2.5 models through Ollama. Enables text analysis, summarization, code generation, and structured data analysis without cloud dependencies.

Instructions

When helping users with Ollama and Qwen models, follow these guidelines:

  1. Installation First: Verify that Ollama is installed and the desired model is pulled before attempting any API calls
  2. Use an Appropriate Model Size: Recommend qwen2.5:7b (4.7 GB) for balanced performance, or a smaller/larger variant depending on available resources
  3. Set Proper Timeouts: Default to a 120 s timeout for analysis tasks; allow longer for complex generation
  4. Handle Streaming: Use "stream": false for single-response calls, and streaming for real-time, token-by-token feedback
  5. System Prompts: Define the model's role and personality in the system message for consistent behavior
  6. Validate Responses: Always check the done field in the response and handle partial output appropriately
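Guidelines 3 through 6 can be sketched as a single non-streaming call to Ollama's local HTTP API. This is a minimal sketch, assuming the default endpoint at http://localhost:11434 and only the Python standard library; the helper names (build_request, validate_response, ask) are illustrative, not part of any Ollama SDK:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_request(prompt: str, system: str = "You are a concise analyst.",
                  model: str = "qwen2.5:7b") -> dict:
    """Assemble a non-streaming chat payload with a system prompt (guidelines 4 and 5)."""
    return {
        "model": model,
        "stream": False,  # one JSON object back instead of a token stream
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
    }

def validate_response(body: dict) -> str:
    """Check the done field before trusting the output (guideline 6)."""
    if not body.get("done"):
        raise RuntimeError("partial response: generation did not finish")
    return body["message"]["content"]

def ask(prompt: str, timeout: float = 120.0) -> str:
    """Send the request with a 120 s default timeout (guideline 3)."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return validate_response(json.load(resp))

if __name__ == "__main__":
    print(ask("Summarize: Ollama serves local LLMs over HTTP."))
```

Keeping payload construction and response validation in separate pure functions makes the done check easy to unit-test without a running Ollama server.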

Examples

Example 1: Basic Installation and Setup
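A minimal sketch of the setup check from guideline 1, assuming the ollama CLI ends up on PATH after running the official install script; the helper names here are illustrative, while `ollama pull` and the install script URL are standard Ollama commands:

```python
import shutil
import subprocess

# Official install on Linux/macOS:  curl -fsSL https://ollama.com/install.sh | sh

def ollama_installed() -> bool:
    """True if the ollama binary is on PATH."""
    return shutil.which("ollama") is not None

def ensure_model(model: str = "qwen2.5:7b") -> None:
    """Pull the model before any API calls; `ollama pull` is a no-op when cached."""
    if not ollama_installed():
        raise RuntimeError("Ollama is not installed; run the install script first")
    subprocess.run(["ollama", "pull", model], check=True)

if __name__ == "__main__":
    ensure_model()
```

Running this once before issuing API requests avoids the common failure mode of calling the endpoint while the model has never been pulled.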
