# Thought-Based Reasoning Techniques for LLMs

## Overview

Chain-of-Thought (CoT) prompting and its variants encourage LLMs to generate intermediate reasoning steps before arriving at a final answer, significantly improving performance on complex reasoning tasks. These techniques transform how models approach problems by making implicit reasoning explicit.
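As a minimal sketch of the idea, zero-shot CoT makes the implicit reasoning explicit simply by appending a trigger phrase to the prompt. The function names and the two-stage answer-extraction step below are illustrative; the actual model call is stubbed out and would be sent to whatever chat-completion API you use.

```python
def zero_shot_cot(question: str) -> str:
    """Wrap a question with the zero-shot CoT trigger phrase."""
    return f"Q: {question}\nA: Let's think step by step."

def extract_final_answer(reasoning: str) -> str:
    """Second-stage prompt asking the model to distill its reasoning
    into a single final answer (the standard two-stage zero-shot CoT setup)."""
    return f"{reasoning}\nTherefore, the final answer is:"

# Example: build the reasoning prompt for a classic trick question.
prompt = zero_shot_cot(
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)
print(prompt)
```

In practice the model's reasoning output is fed back through `extract_final_answer` so the final answer can be parsed reliably, independent of how verbose the intermediate steps are.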

## Quick Reference

| Technique | When to Use | Complexity | Accuracy Gain |
|---|---|---|---|
| Zero-shot CoT | Quick reasoning, no examples available | Low | +20-60% |
| Few-shot CoT | Have good examples, consistent format needed | Medium | +30-70% |
| Self-Consistency | High-stakes decisions, need confidence | Medium | +10-20% over CoT |
| Tree of Thoughts | Complex problems requiring exploration | High | +50-70% on hard tasks |
| Least-to-Most | Multi-step problems with subproblems | Medium | +30-80% |
| ReAct | Tasks requiring external information | Medium | +15-35% |
| PAL | Mathematical/computational problems | Medium | +10-15% |
| Reflexion | Iterative improvement, learning from errors | High | +10-20% |
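Of the techniques above, self-consistency is the easiest to sketch without a live model: sample several independent reasoning paths at a nonzero temperature, extract each path's final answer, and return the majority vote. The sampler below is a stub standing in for temperature-sampled LLM calls; the function names are illustrative, not from any particular library.

```python
from collections import Counter
import itertools

def self_consistency(sample_fn, prompt: str, n: int = 5) -> str:
    """Sample n reasoning paths and return the majority-vote answer.

    sample_fn(prompt) must return one final-answer string per call,
    e.g. one temperature-sampled CoT completion reduced to its answer.
    """
    answers = [sample_fn(prompt) for _ in range(n)]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

# Stub sampler: pretends 4 of 5 sampled reasoning paths reached "$0.05".
_fake_paths = itertools.cycle(["$0.05", "$0.05", "$0.10", "$0.05", "$0.05"])
answer = self_consistency(lambda p: next(_fake_paths),
                          "How much does the ball cost?", n=5)
print(answer)  # → $0.05
```

Majority voting is what buys the extra +10-20% over plain CoT in the table: individually wrong reasoning paths tend to disagree with each other, while correct ones tend to converge on the same answer.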