LLM Prompt Optimizer

The LLM Prompt Optimizer skill systematically analyzes and refines prompts to maximize the quality, accuracy, and relevance of large language model outputs. It applies evidence-based optimization techniques including structural improvements, context enrichment, constraint calibration, and output format specification.

This skill goes beyond basic prompt writing by applying an understanding of how different LLMs process instructions, where their attention tends to focus, and how they tend to respond. It helps you transform underperforming prompts into high-yield instructions that consistently produce the results you need.

Whether you are building production AI systems, conducting research, or simply want better ChatGPT responses, this skill ensures your prompts are optimized for your specific model and use case.

Core Workflows

Workflow 1: Analyze and Diagnose Prompt Issues

  1. Receive the current prompt and sample outputs
  2. Identify failure patterns:
    • Hallucination triggers
    • Ambiguity sources
    • Missing context gaps
    • Conflicting instructions
    • Over/under-constrained parameters
  3. Map issues to specific prompt segments
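The diagnosis steps above can be sketched as a small rule-based checker. This is a minimal illustrative sketch, not the skill's actual implementation: the `ISSUE_CHECKS` heuristics, the `diagnose` function, and the issue names are all assumptions chosen for the example.

```python
import re

# Hypothetical failure-pattern heuristics (step 2). Real diagnosis would be
# far richer; these regexes only illustrate the issue -> pattern mapping.
ISSUE_CHECKS = {
    "ambiguity": re.compile(r"\b(it|this|that|something|stuff)\b", re.IGNORECASE),
    "vague_quantity": re.compile(r"\b(some|several|a few|many)\b", re.IGNORECASE),
    "conflicting_instructions": re.compile(
        r"\bbrief\b.*\bdetailed\b|\bdetailed\b.*\bbrief\b",
        re.IGNORECASE | re.DOTALL,
    ),
}

def diagnose(prompt: str) -> list[dict]:
    """Step 3: map each detected issue to the prompt segment that triggers it."""
    findings = []
    for issue, pattern in ISSUE_CHECKS.items():
        for match in pattern.finditer(prompt):
            findings.append({
                "issue": issue,
                "segment": match.group(0),
                "position": match.start(),
            })
    return findings

for finding in diagnose("Write something brief about it, but make it detailed."):
    print(finding)
```

A well-constrained prompt such as "Summarize the attached Q3 sales report in exactly three bullet points" would pass these checks cleanly, while the example above is flagged for both ambiguous referents and conflicting length instructions.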