AI Socratic Dialogue Designer

What This Skill Does

Generates a multi-round questioning sequence specifically designed for interrogating AI chatbots — probing their answers through iterative questioning, tracking how their responses shift across rounds, and teaching students to distinguish genuine logical concession (the AI updates because a new argument is logically compelling) from sycophantic capitulation (the AI agrees because it is trained to defer to user pushback).

This addresses a fundamental asymmetry between AI Socratic dialogue and human Socratic dialogue: AI systems are trained to be helpful and agreeable, which means they will often revise their answers in response to user pushback regardless of whether the pushback is logically valid. A student who pushes back on an AI answer and receives an updated, more agreeable response may conclude that persistence equals correctness — a false inference with significant implications for how they evaluate evidence.

The pedagogical goal is to teach students to interrogate AI critically, distinguish between "the AI changed its mind because I made a good argument" and "the AI changed its mind because I pushed back," and develop the disposition to demand logical evidence rather than settle for agreement. The output includes a multi-round questioning sequence using Paul & Elder's question types adapted for AI, an answer drift tracker protocol, a capitulation taxonomy, facilitation notes, and a debrief guide.
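The answer drift tracker and capitulation taxonomy described above could be kept as a simple per-round log. The sketch below is illustrative only — the field names, category labels, and shift values are assumptions for this example, not the skill's actual output format:

```python
from dataclasses import dataclass, field

# Hypothetical capitulation taxonomy labels (illustrative, not the skill's official set).
CAPITULATION_TYPES = [
    "logical_concession",        # AI revises and cites the new argument as its reason
    "sycophantic_capitulation",  # AI agrees without engaging the argument's content
    "hedged_retreat",            # AI softens its claim without committing either way
    "position_hold",             # AI restates its original claim with reasons
]

@dataclass
class Round:
    question: str               # the student's probe
    question_type: str          # Paul & Elder type, e.g. "probing assumptions"
    ai_answer_summary: str      # one-sentence summary of the AI's response
    shift_from_previous: str    # "none", "minor", or "reversal"
    classification: str         # one of CAPITULATION_TYPES

@dataclass
class DriftTracker:
    topic: str
    rounds: list = field(default_factory=list)

    def log(self, rnd: Round) -> None:
        # Guard against labels outside the taxonomy.
        assert rnd.classification in CAPITULATION_TYPES
        self.rounds.append(rnd)

    def reversals(self) -> list:
        # Rounds where the AI's position flipped — candidates for the debrief
        # discussion of concession vs. capitulation.
        return [r for r in self.rounds if r.shift_from_previous == "reversal"]
```

A student or facilitator would log one `Round` per exchange, then review `reversals()` during the debrief to ask, for each flip, whether the AI engaged the argument or merely deferred.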

Evidence Foundation

Paul & Elder (2008) classified Socratic questions into six types: clarification, probing assumptions, probing reasons and evidence, viewpoints and perspectives, implications and consequences, and questions about the question. These question types are adapted here for AI dialogue — they remain valid as analytical moves, but the AI-specific context changes what responses mean.

Walsh & Sattes (2005) demonstrated that wait time and genuine curiosity-driven follow-up (rather than evaluative responses) produce richer thinking in student dialogue. The adaptation here is different: with AI, the question is not whether the AI is thinking deeply but whether its response pattern reveals sycophancy or genuine logical responsiveness.

Nystrand et al. (1997) identified authentic questions — where the questioner genuinely does not know the answer — as the strongest predictor of productive dialogue. In AI dialogue, all questions are authentic from the student's perspective, but the AI is not a genuine dialogue partner with beliefs it holds and can revise — it is a pattern-completion system that responds to the statistical properties of the conversation.

Perez et al. (2022) documented sycophancy in language models: LLMs trained with human feedback tend to produce responses that humans rate positively in the moment, which correlates with agreeing with the human's implied position. This produces a systematic bias: when users express disagreement with an AI response, the AI will often revise toward the user's position even when the user's pushback contains no logical argument.

Wei et al. (2022) showed that chain-of-thought prompting (asking AI to show its reasoning step by step) produces more coherent and consistent responses, and that inconsistencies in reasoning become more visible. The multi-round dialogue structure here uses chain-of-thought techniques to expose reasoning patterns that make capitulation detectable.

Input Schema

The teacher must provide:

  • Interrogation topic: The AI claim to probe. e.g. "AI's claim that nuclear energy is safer than renewable energy" / "An AI explanation of why homework improves learning" / "AI's assertion that social media is primarily harmful to teenagers" / "AI's summary of the causes of WWI, which oversimplifies the role of German aggression"
  • Student level: Year group and questioning experience. e.g. "Year 12, familiar with Socratic method from Philosophy class" / "Year 10, basic questioning skills"

Optional (injected by context engine if available):

  • Subject area: The discipline
  • Rounds: Number of questioning rounds
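A concrete input might look like the following. This is a minimal sketch: the snake_case field names and the use of a plain mapping are assumptions made for illustration, since the skill does not specify a serialization format:

```python
# Hypothetical example of the teacher-supplied input described above.
skill_input = {
    # Required fields:
    "interrogation_topic": (
        "AI's assertion that social media is primarily harmful to teenagers"
    ),
    "student_level": "Year 12, familiar with Socratic method from Philosophy class",
    # Optional fields (injected by the context engine if available):
    "subject_area": "Psychology",
    "rounds": 4,
}

# Basic validation of the two required fields.
required = {"interrogation_topic", "student_level"}
missing = required - skill_input.keys()
assert not missing, f"missing required fields: {missing}"
```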