prompt-literacy-sequence-designer

Prompt Literacy Sequence Designer

What This Skill Does

Generates a structured learning sequence that teaches students why prompt quality determines AI output quality, and what specific prompt moves produce more useful, accurate, and contextually appropriate responses. The sequence follows a compare-contrast structure: students run vague and refined prompts on the same question, analyse the difference in output quality, and abstract the principles. The core insight is that AI fills missing context with the most statistically common response, so a prompt with no context about audience, purpose, discipline, or constraints will receive an answer calibrated for the average case, not the student's specific situation.

The Pricing Exercise (Kharbach, 2026) is included as the anchor activity: students take a context-free AI answer ("What should I charge for a service?") and iteratively add constraints (type of service, location, target market, quality level), showing in real time how specificity transforms output from generically unhelpful to genuinely useful.

The sequence teaches five prompt dimensions:

  • Context: who am I, what am I doing?
  • Task: exactly what do I want?
  • Constraints: what limits apply?
  • Format: how should the output be structured?
  • Persona: what role should the AI take?

Prompt literacy is a prerequisite for effective AI use and a direct complement to AI output evaluation skills.
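
To make the five dimensions concrete, the sketch below is a hypothetical illustration only: the class name, field names, helper method, and example values are invented for this page and are not part of the skill's output. It shows how the vague Pricing Exercise prompt differs from one that specifies all five dimensions.

```python
# Hypothetical sketch: the five prompt dimensions as a simple data structure,
# using the Pricing Exercise as the worked example. Names and values are
# illustrative, not a published interface of the skill.
from dataclasses import dataclass


@dataclass
class PromptSpec:
    context: str      # who am I, what am I doing?
    task: str         # exactly what do I want?
    constraints: str  # what limits apply?
    format: str       # how should the output be structured?
    persona: str      # what role should the AI take?

    def assemble(self) -> str:
        """Join the five dimensions into a single refined prompt."""
        return " ".join([
            f"Act as {self.persona}.",
            f"Context: {self.context}.",
            f"Task: {self.task}.",
            f"Constraints: {self.constraints}.",
            f"Format: {self.format}.",
        ])


# The context-free prompt students start from.
vague_prompt = "What should I charge for a service?"

# The same question after students iteratively add constraints.
refined_prompt = PromptSpec(
    context="I run a one-person garden-design business in a mid-sized UK town",
    task="suggest an hourly rate and a fixed price for a full garden redesign",
    constraints="clients are mostly first-time homeowners; premium quality, small jobs",
    format="a short table of pricing options with a one-line rationale for each",
    persona="an experienced small-business pricing consultant",
).assemble()

# Students run both prompts on the same model and compare the outputs: the
# vague prompt draws the statistically average answer, the refined prompt
# draws an answer calibrated to this specific situation.
print(vague_prompt)
print(refined_prompt)
```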

Evidence Foundation

Brown et al. (2020), in the GPT-3 paper, demonstrated empirically that the way a prompt is formulated dramatically affects model output quality: few-shot examples in the prompt (showing the AI what a good response looks like) produce substantially better results than zero-shot prompts (no examples). This is the foundational evidence that prompt design is not arbitrary. Liu et al. (2023) conducted a systematic survey of prompting methods, documenting how different prompt structures (chain-of-thought, role-play, instruction-following, few-shot) affect output quality across tasks. Their survey establishes that prompt engineering is a skill with learnable principles, not a matter of chance. Reynolds & McDonell (2021) extended this to the concept of "metaprompts": prompts that explicitly instruct the AI about how to reason, structure its response, or adopt a persona, showing that these structural elements can substantially improve output quality. These three sources provide the AI-specific evidence base for prompt literacy instruction.

However, the pedagogical evidence base for teaching prompt literacy to students is currently very limited; this is frontier territory in educational research. The remaining sources support the instructional design of this sequence: Rosenshine (2012) provides the modelling → guided practice → independent practice structure used here, and Willingham (2007) provides the domain-specificity argument (what counts as a good prompt in history is different from what counts as one in mathematics) that justifies subject-specific prompt literacy instruction.
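
As a concrete illustration of the zero-shot/few-shot distinction described by Brown et al. (2020), here is a minimal sketch; the example sentences are invented for illustration and are not drawn from the skill or the cited paper.

```python
# Hypothetical illustration of zero-shot vs few-shot prompting.

# Zero-shot: the task is stated, but no worked example is shown.
zero_shot = "Summarise the causes of the 1929 Wall Street Crash in two sentences."

# Few-shot: one or more worked examples precede the real task, showing the
# model what a good response looks like before it answers.
few_shot = (
    "Summarise each event in two sentences.\n\n"
    "Event: The 1848 revolutions in Europe.\n"
    "Summary: A wave of liberal and nationalist uprisings challenged the old "
    "monarchies; most were suppressed within a year, but they accelerated "
    "constitutional reform.\n\n"
    "Event: The 1929 Wall Street Crash.\n"
    "Summary:"
)
```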

Input Schema

The teacher must provide:

  • Subject area: The discipline. e.g. "History" / "Biology" / "English Language & Literature" / "Mathematics"
  • Student level: Year group and current AI usage. e.g. "Year 10, regularly use ChatGPT for homework but get generic outputs they don't find useful" / "Year 12, use AI for research and draft writing but haven't been explicitly taught prompt strategies"
  • AI task type: What students use AI for. e.g. "Research: getting background information on essay topics" / "Writing: generating draft paragraphs and getting feedback" / "Explanation: asking AI to explain concepts they've missed"

Optional (injected by context engine if available):

  • Prompt literacy focus: Which prompt dimension to emphasise
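
Taken together, the inputs above could be represented as a small structured record. The sketch below is a hypothetical rendering assuming Python 3.11+ (for typing.NotRequired); the field names mirror the lists above but are not a published interface of the skill.

```python
# Hypothetical sketch of the input schema as a typed dictionary.
from typing import TypedDict, NotRequired


class SequenceDesignerInput(TypedDict):
    subject_area: str                         # e.g. "History"
    student_level: str                        # year group plus current AI usage
    ai_task_type: str                         # what students use AI for
    prompt_literacy_focus: NotRequired[str]   # optional: dimension to emphasise


# Example input built from the illustrations given in the lists above.
example_input: SequenceDesignerInput = {
    "subject_area": "History",
    "student_level": "Year 10, regularly use ChatGPT for homework but get generic outputs",
    "ai_task_type": "Research: getting background information on essay topics",
}
```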