building-with-llms

Summary

Practical guidance for building effective AI applications using techniques from 60 product leaders and practitioners.

  • Covers core prompting patterns: few-shot examples, decomposition for complex tasks, self-criticism, and context placement for cache efficiency
  • Emphasizes architecture decisions over prompt tuning: context engineering, RAG data preparation, layered model supervision, and specialized models for specific tasks
  • Provides evaluation frameworks: mandatory evals with binary Pass/Fail scoring, LLM-as-judge validation, and moving from vibes testing to systematic measurement
  • Includes iteration strategies: retry stochastic failures, cross-pollinate between models, and build reusable prompt libraries for compounding team effectiveness
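To make the "binary Pass/Fail evals" idea concrete, here is a minimal sketch of an eval harness. It is not from the skill itself; the function names, the stub system, and the cases are illustrative assumptions. The point it demonstrates is that each case gets a strict Pass or Fail verdict, with no partial credit, so results can be aggregated into a pass rate.

```python
def run_evals(system_under_test, cases):
    """Run each eval case and score it binary Pass/Fail (no partial credit).

    `system_under_test` is any callable: input string -> output string.
    Each case is (input, check) where `check` returns True on pass.
    Returns the per-case verdicts and the overall pass rate.
    """
    results = []
    for prompt, check in cases:
        output = system_under_test(prompt)
        results.append((prompt, "Pass" if check(output) else "Fail"))
    passed = sum(1 for _, verdict in results if verdict == "Pass")
    return results, passed / len(results)


# Hypothetical stand-in for an LLM call: a stub that upper-cases its input.
stub = lambda s: s.upper()
cases = [
    ("hello", lambda out: out == "HELLO"),
    ("abc", lambda out: out.isupper()),
]
results, pass_rate = run_evals(stub, cases)
print(pass_rate)  # 1.0
```

In practice the stub would be replaced by a real model call, and checks can themselves be LLM-as-judge calls, as long as the judge's verdict is still reduced to a binary Pass/Fail.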
SKILL.md

Building with LLMs

Help the user build effective AI applications using practical techniques from 60 product leaders and AI practitioners.

How to Help

When the user asks for help building with LLMs:

  1. Understand their use case - Ask what they're building (chatbot, agent, content generation, code assistant, etc.)
  2. Diagnose the problem - Help identify whether issues are prompt-related, context-related, or model-selection-related
  3. Apply relevant techniques - Share specific prompting patterns, architecture approaches, or evaluation methods
  4. Challenge common mistakes - Push back on over-reliance on vibes, skipping evals, or using the wrong model for the task

Core Principles

Prompting

Few-shot examples beat descriptions. Sander Schulhoff: "If there's one technique I'd recommend, it's few-shot prompting—giving examples of what you want. Instead of describing your writing style, paste a few previous emails and say 'write like this.'"
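The technique above can be sketched as a small prompt builder: instead of describing the desired style, it places example input/output pairs before the new input. This is a minimal illustration, not code from the skill; the function name, the `"Write like this:"` instruction, and the sample emails are assumptions.

```python
def build_few_shot_prompt(examples, task_input, instruction="Write like this:"):
    """Assemble a few-shot prompt: show example input/output pairs of the
    desired style before the new input, rather than describing the style."""
    parts = [instruction]
    for source, target in examples:
        parts.append(f"Input: {source}\nOutput: {target}")
    # End with the new input and a trailing "Output:" for the model to complete.
    parts.append(f"Input: {task_input}\nOutput:")
    return "\n\n".join(parts)


# Hypothetical examples: terse notes paired with the writer's actual email style.
emails = [
    ("Meeting moved to 3pm",
     "Hi team, quick heads-up: today's meeting is now at 3pm. See you there!"),
    ("Report is done",
     "Hi all, the quarterly report is finished and attached. Happy to discuss!"),
]
prompt = build_few_shot_prompt(emails, "Office closed Friday")
print(prompt)
```

The resulting string would then be sent to whatever model API the application uses; two or three representative examples are usually enough to anchor tone and format.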

Installs: 1.2K · GitHub Stars: 879 · First Seen: Jan 29, 2026