llm-prompt-optimizer

LLM Prompt Optimizer

Overview

This skill transforms weak, vague, or inconsistent prompts into precision-engineered instructions that reliably produce high-quality output from any LLM (Claude, Gemini, GPT-4, Llama, etc.). It applies systematic prompt-engineering frameworks: zero-shot and few-shot prompting, chain-of-thought reasoning, and structured-output patterns.
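As a minimal sketch of the transformation this skill performs, compare a vague request with a version that adds a role, explicit constraints, and an output contract. The prompt wording and the `build_prompt` helper below are illustrative, not part of the skill itself:

```python
# Hypothetical example: the same request as a weak prompt versus a
# precision-engineered prompt with role, constraints, and an explicit
# output contract.

weak_prompt = "Summarize this article."

# Literal braces in the JSON contract are doubled so str.format()
# leaves them intact.
optimized_prompt = """\
You are a technical editor producing summaries for busy engineers.

Task: Summarize the article delimited by <article> tags.

Constraints:
- Exactly 3 bullet points, each under 20 words.
- Preserve concrete numbers; omit marketing language.

Output format: a JSON object {{"bullets": ["...", "...", "..."]}}.

<article>
{article_text}
</article>
"""

def build_prompt(article_text: str) -> str:
    """Fill the optimized template with the article to summarize."""
    return optimized_prompt.format(article_text=article_text)
```

Delimiting untrusted input with tags and pinning the output format are the two changes that most directly reduce vague or inconsistent replies.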

When to Use This Skill

  • Use when a prompt returns inconsistent, vague, or hallucinated results
  • Use when you need structured/JSON output from an LLM reliably
  • Use when designing system prompts for AI agents or chatbots
  • Use when you want to reduce token usage without sacrificing quality
  • Use when implementing chain-of-thought reasoning for complex tasks
  • Use when prompts work on one model but fail on another
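For the structured/JSON output case above, prompting is only half the job; the reply still has to be parsed defensively, since some models wrap JSON in markdown fences. A small sketch of one tolerant parsing approach (the helper name is illustrative):

```python
import json
import re

def extract_json(reply: str) -> dict:
    """Parse a JSON object out of a model reply, tolerating the
    markdown code fences some models wrap structured output in."""
    # Strip an optional ```json ... ``` fence around the object.
    match = re.search(r"```(?:json)?\s*(\{.*\})\s*```", reply, re.DOTALL)
    payload = match.group(1) if match else reply
    return json.loads(payload)
```

Pairing an explicit output contract in the prompt with a fence-tolerant parser like this is what makes structured output reliable across models.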

Step-by-Step Guide

1. Diagnose the Weak Prompt
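Diagnosis can be partly mechanized. The checks below are an illustrative sketch of common weakness heuristics (underspecification, no output format, no role, no examples), not an exhaustive rubric:

```python
def diagnose(prompt: str) -> list[str]:
    """Flag common weaknesses in a prompt.

    Heuristics are illustrative only; real diagnosis also weighs
    task complexity and the target model.
    """
    text = prompt.lower()
    issues = []
    if len(prompt.split()) < 10:
        issues.append("too short: likely underspecified")
    if "format" not in text and "json" not in text:
        issues.append("no explicit output format")
    if not any(w in text for w in ("you are", "act as", "role")):
        issues.append("no role/persona assigned")
    if "example" not in text:
        issues.append("zero-shot: consider adding few-shot examples")
    return issues
```

Running such checks on a one-line prompt like "Summarize this." flags every category, which is exactly the signal that it needs the optimization steps that follow.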

First Seen: Mar 6, 2026