intelligent-tutoring-dialogue-designer

Intelligent Tutoring Dialogue Designer

What This Skill Does

Designs the dialogue logic for an AI tutoring interaction — when to ask a question, when to give a hint, when to explain, when to prompt for self-explanation, and when to stay silent. This is the hardest design problem in intelligent tutoring: too much intervention prevents productive struggle and creates dependency; too little leaves students stuck and frustrated.

VanLehn (2011) showed that the effectiveness of tutoring (human or AI) depends on the quality of the step-level interaction — systems that engage students in active reasoning at each step dramatically outperform systems that simply present content and evaluate final answers. Chi et al. (2001) analysed what makes human tutoring effective and found, counter-intuitively, that the most effective tutors were NOT the ones who explained the most — they were the ones who asked the right questions and created opportunities for students to self-explain.

The output includes:

  • A complete dialogue architecture: the phases and branching logic of the interaction
  • A library of dialogue moves: the specific things the tutor can say or do
  • Decision rules: when to use each move based on student responses
  • A worked example showing the system in action

AI is specifically valuable here because it can sustain one-to-one dialogue at scale — but the dialogue must be deliberately designed or the AI will default to lecturing, which is the least effective tutoring strategy.
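The "elicit, then explain" pattern described above can be sketched in outline. The following fragment is illustrative only: the function name, the keyword-matching stand-in for answer assessment, and the canned replies are all hypothetical, not part of the skill's actual output; a real system would use a proper assessment model rather than substring matching.

```python
def respond_to_attempt(attempt: str, key_points: set[str]) -> str:
    """After eliciting a self-explanation, build on whatever the
    student produced rather than lecturing (Chi et al., 2001).
    Substring matching stands in for a real assessment model."""
    if not attempt.strip():
        # No attempt yet: elicit before explaining anything.
        return "Give it a try: what do you think happens, and why?"
    covered = {p for p in key_points if p in attempt.lower()}
    missing = key_points - covered
    if not missing:
        # All key points present: push for deeper self-explanation.
        return "Exactly. Can you say why that must be true?"
    # Acknowledge what the student got, then prompt for one gap.
    gap = sorted(missing)[0]
    return f"Good start. What about {gap}? How does that fit in?"
```

Note that the tutor's reply is always a question: even a fully correct attempt is met with a request for further reasoning, keeping the student in the active role.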

Evidence Foundation

VanLehn (2011) conducted a comprehensive meta-analysis comparing human tutoring, intelligent tutoring systems, and no-tutoring conditions. He found that the critical factor was not WHO was tutoring but HOW. "Inner loop" systems — those that provided feedback and scaffolding at each problem-solving step — achieved effect sizes of 0.76, nearly matching human tutors (0.79). "Outer loop" systems — those that only evaluated final answers — achieved much lower effect sizes (0.31). The implication for dialogue design is clear: the system must engage with the student's reasoning process, not just their final answer.

Chi et al. (2001) conducted a detailed analysis of effective human tutoring dialogues and identified a surprising finding: the most effective tutors did NOT give the best explanations. Instead, they used a pattern of "elicit, then explain" — first prompting the student to attempt an explanation, then building on whatever the student produced. This finding directly contradicts the intuitive assumption that good tutoring is about clear explanation. The reason: when a tutor explains, the student passively receives; when a tutor prompts and the student attempts to explain, the student actively constructs understanding (Chi & Wylie, 2014 — the ICAP framework).

Graesser et al. (2005) developed AutoTutor, one of the most extensively researched intelligent tutoring systems, which uses a "mixed-initiative dialogue" approach. AutoTutor asks questions, evaluates student responses, gives feedback, and prompts for elaboration — maintaining a conversational exchange rather than a lecture. Their research identified five key dialogue moves: pumps ("Tell me more"), prompts (asking for specific information), hints (pointing toward the answer without giving it), assertions (providing information), and corrections (directly addressing errors).

Koedinger & Aleven (2007) articulated the "assistance dilemma" — the fundamental tension in tutoring design. Too much assistance (explaining everything, giving hints too quickly) produces shallow learning: the student completes the task but doesn't understand why. Too little assistance (never intervening, making students struggle endlessly) produces frustration and abandonment. The optimal tutoring strategy navigates between these extremes, providing the MINIMUM assistance necessary for the student to make progress.
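As an illustration of how the five AutoTutor moves and the minimum-assistance principle might combine into a decision rule, here is a minimal sketch. The escalation ladder, the function, and its parameters are assumptions for this example, not something specified by Graesser et al. or Koedinger & Aleven.

```python
from enum import Enum

class Move(Enum):
    """The five AutoTutor dialogue moves (Graesser et al., 2005)."""
    PUMP = "pump"              # "Tell me more"
    PROMPT = "prompt"          # ask for a specific piece of information
    HINT = "hint"              # point toward the answer without giving it
    ASSERTION = "assertion"    # provide the information directly
    CORRECTION = "correction"  # directly address an error

# Hypothetical escalation ladder: offer the least assistance first and
# step up only while the student stays stuck, per the minimum-assistance
# principle (Koedinger & Aleven, 2007).
ESCALATION = [Move.PUMP, Move.PROMPT, Move.HINT, Move.ASSERTION]

def next_move(stuck_turns: int, has_error: bool) -> Move:
    """Pick the least-assistive move that can plausibly unblock the student."""
    if has_error:
        # Explicit errors are addressed directly rather than escalated.
        return Move.CORRECTION
    # Clamp to the most assistive move once the ladder is exhausted.
    return ESCALATION[min(stuck_turns, len(ESCALATION) - 1)]
```

The design choice the ladder encodes is that an assertion is a last resort, reached only after a pump, a prompt, and a hint have all failed to move the student forward.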

Input Schema

The teacher must provide:

  • Learning objective: What the student should master. e.g. "Understanding why heavier objects do NOT fall faster than lighter objects (Newton's vs. Aristotelian physics)" / "Being able to identify the main argument in a non-fiction text and distinguish it from supporting evidence" / "Understanding that multiplying by a fraction less than 1 makes a number smaller, not bigger"
  • Anticipated difficulties: Where students struggle. e.g. "Students have the Aristotelian intuition that heavier = faster. They cite everyday experience (dropping a feather vs. a ball) as 'proof.' They struggle to distinguish air resistance from gravitational acceleration" / "Students confuse the topic of a text with the argument. They identify facts rather than claims. They struggle to distinguish what the author believes from what the author reports" / "Students apply the 'multiplication makes bigger' rule from whole numbers and are confused when ½ × 6 = 3"

Optional (injected by context engine if available):

  • Student level: Year group and proficiency
  • Subject area: The curriculum subject
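For concreteness, the input schema above might be encoded roughly as follows. The class and field names are illustrative, not a published interface; the example values are drawn from the fractions objective given earlier.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TutoringInput:
    """Illustrative container for the skill's inputs."""
    # Required: provided by the teacher
    learning_objective: str
    anticipated_difficulties: List[str]
    # Optional: injected by the context engine if available
    student_level: Optional[str] = None
    subject_area: Optional[str] = None

example = TutoringInput(
    learning_objective=(
        "Understanding that multiplying by a fraction less than 1 "
        "makes a number smaller, not bigger"
    ),
    anticipated_difficulties=[
        "Students apply the 'multiplication makes bigger' rule from "
        "whole numbers and are confused when 1/2 x 6 = 3",
    ],
)
```

Keeping the optional fields nullable mirrors the schema: the skill must work from the teacher-provided fields alone when no context engine is present.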