ai-expertise-interrogation-designer
AI Expertise Interrogation Designer
What This Skill Does
Generates an activity — Kharbach's (2026) "Funhouse Mirror" — in which students use their own domain expertise as the instrument for detecting AI distortions, omissions, and overconfidence. This reverses the usual expert-novice dynamic in AI evaluation: in most AI literacy activities, students are novices being warned about AI's limitations. In this activity, students ARE the domain experts. They ask AI about something they know genuinely well — a subject they have studied deeply, a sport they play competitively, a cultural context they grew up in, a local geography they know intimately — and they use their knowledge to identify what the AI gets wrong, oversimplifies, flattens, or presents with false confidence.

The pedagogical mechanism is calibrated AI skepticism through direct confrontation: finding an AI error in your own domain of expertise creates a visceral, durable understanding that AI is not omniscient. A student who has found AI confidently wrong about their sport, their cultural tradition, or their hometown has earned a far more reliable skepticism than one who has been warned abstractly that "AI can make mistakes."

The output includes:
- an expertise activation protocol (students document their own knowledge before consulting AI)
- calibrated interrogation questions
- a distortion annotation protocol
- a distortion taxonomy for the specific domain
- a discussion guide for synthesising findings across the class's varied expertise areas
Evidence Foundation
Chi, Glaser & Farr (1988) established the expert-novice framework: experts organise knowledge differently from novices, perceive problems at a deeper level, and have richer, more interconnected representations of their domain. Crucially, experts notice what is absent or distorted in a representation of their domain — they have the knowledge to see gaps. This is the foundation of the activity: students with genuine domain expertise can see AI distortions that a general student would miss, because only someone who knows what should be there can notice what isn't.

Ericsson & Smith (1991) established that expertise is characterised by the ability to notice subtle distinctions and deviations from expected patterns — the same cognitive mechanism activated when an expert reads AI output about their field.

Thiede et al. (2003) demonstrated that metacognitive accuracy (the correlation between judged and actual understanding) improves substantially when learners have a genuine knowledge base to compare against — finding a discrepancy between what you know and what you've read activates accurate self-assessment. The activity creates exactly this: a comparison between the student's domain knowledge and the AI's output.

Dunning et al. (2003) on the Dunning-Kruger effect is relevant in the inverse direction: students who know a domain well are BETTER at detecting AI limitations in that domain than students who know it poorly — expertise enables the recognition of AI's incompetence, not just one's own.

Kazemitabaar et al. (2023) provided empirical evidence that AI assistance in learning contexts can conceal genuine knowledge gaps; this activity is designed to expose that dynamic by reversing the expert/novice positioning.
Input Schema
The teacher must provide:
- Student expertise domain: What students genuinely know well. e.g. "Football (specifically: Manchester United history and Premier League statistics)" / "Hungarian folk music and dance traditions (local knowledge)" / "Competitive swimming — technique, training, competitions" / "K-Pop culture, specifically BTS discography and fandom" / "The geography and history of Budapest's 7th district"
- Student level: Year group and depth of expertise. e.g. "Year 10, students have strong individual expertise in varied domains (sport, music, geography, cultural traditions)" / "Year 12, students have studied History to A-level depth"
Optional (injected by context engine if available):
- Interrogation depth: Surface vs. deep errors
- Discussion format: How findings will be shared
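Put together, a complete teacher input might look like the sketch below. The field names are hypothetical (this page does not specify a wire format); the values are drawn from the examples above:

```yaml
# Illustrative input for ai-expertise-interrogation-designer
# Field names are assumed, not a documented schema.
student_expertise_domain: "Competitive swimming — technique, training, competitions"
student_level: "Year 10, students have strong individual expertise in varied domains (sport, music, geography, cultural traditions)"

# Optional fields, injected by the context engine if available
interrogation_depth: "deep errors, not just surface factual slips"
discussion_format: "small-group synthesis, then whole-class discussion across expertise areas"
```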