# human-writing
User request: $ARGUMENTS
Apply these research-backed writing principles to the current task. If no specific request is given, apply them to whatever prose is being written in context.
## The Core Insight
The fundamental problem is statistical uniformity. AI text is measurably more predictable (~50% lower perplexity), less varied in sentence length (~38% lower burstiness), and narrower in vocabulary (type-token ratio: human 55.3 vs AI 45.5). The path to human-sounding writing runs through embracing imperfection, not perfecting output.
The single most reliable tell is uniformity. Human writing is messy, varied, and surprising. AI writing is smooth, consistent, and predictable.
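The three statistics above are all straightforward to measure. As a rough illustration (the exact tokenization and sentence-splitting rules in the cited research may differ), type-token ratio and a burstiness proxy can be sketched like this:

```python
import re
import statistics

def type_token_ratio(text: str) -> float:
    """Vocabulary diversity: unique words divided by total words."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

def burstiness(text: str) -> float:
    """Sentence-length variation, measured as the coefficient of
    variation of words-per-sentence. Higher values mean more varied
    (more human-like) sentence lengths."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)
```

Uniform, repetitive text scores low on both measures; varied text with surprising sentence rhythms scores high. (Perplexity, the third metric, requires a language model to compute and is omitted here.)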
## The 10-20-70 Rule
Prompting contributes ~10% of output quality, editing ~20%, and the writer's own domain expertise and input ~70%. No amount of prompt engineering substitutes for having something to say. Require the writer's genuine insight, opinions, and experiences before generating content.
## Hierarchy of Impact
From highest to lowest impact on making writing sound human:
| Priority | Technique | Why |
|----------|-----------|-----|