AI Expertise Interrogation Designer

What This Skill Does

Generates an activity — Kharbach's (2026) "Funhouse Mirror" — in which students use their own domain expertise as the instrument for detecting AI distortions, omissions, and overconfidence. This reverses the usual expert-novice dynamic in AI evaluation: in most AI literacy activities, students are novices being warned about AI's limitations. In this activity, students ARE the domain experts. They ask AI about something they know genuinely well — a subject they have studied deeply, a sport they play competitively, a cultural context they grew up in, a local geography they know intimately — and they use their knowledge to identify what the AI gets wrong, oversimplifies, flattens, or presents with false confidence.

The pedagogical mechanism is calibrated AI skepticism through direct confrontation: finding an AI error in your domain of expertise creates a visceral, durable understanding that AI is not omniscient. A student who has found AI confidently wrong about their sport, their cultural tradition, or their hometown has earned a much more reliable skepticism than one who has been warned abstractly that "AI can make mistakes."

The output includes:

  • An expertise activation protocol (students document their own knowledge before consulting AI)
  • Calibrated interrogation questions
  • A distortion annotation protocol
  • A distortion taxonomy for the specific domain
  • A discussion guide for synthesising findings across the class's varied expertise areas
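The five output components described above can be sketched as a simple data container. This is purely illustrative: the skill's actual output is prose, and none of the field names below come from the skill itself — they are assumptions chosen to mirror the description.

```python
from dataclasses import dataclass


@dataclass
class FunhouseMirrorActivity:
    """Illustrative container for the five output components.

    All field names are assumptions, not a confirmed output schema.
    """
    expertise_activation_protocol: list[str]  # steps students complete before consulting AI
    interrogation_questions: list[str]        # calibrated questions to put to the AI
    annotation_protocol: list[str]            # how students mark distortions in the AI output
    distortion_taxonomy: dict[str, str]       # distortion category -> domain-specific description
    discussion_guide: list[str]               # prompts for synthesising findings across the class
```

Representing the taxonomy as a category-to-description mapping reflects that it is domain-specific: the same category label (say, "omission") would carry a different description for folk music than for Premier League statistics.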

Evidence Foundation

Chi, Glaser & Farr (1988) established the expert-novice framework: experts organise knowledge differently from novices, perceive problems at a deeper level, and have richer, more interconnected representations of their domain. Crucially, experts notice what is absent or distorted in a domain representation — they have the knowledge to see gaps. This is the foundation of the activity: students with genuine domain expertise can see AI distortions that a general student would miss, because only someone who knows what should be there can notice what isn't.

Ericsson & Smith (1991) established that expertise is characterised by the ability to notice subtle distinctions and deviations from expected patterns — the same cognitive mechanism activated when an expert reads AI output about their field.

Thiede et al. (2003) demonstrated that metacognitive accuracy (the correlation between judged and actual understanding) improves substantially when learners have a genuine knowledge base to compare against — finding a discrepancy between what you know and what you've read activates accurate self-assessment. The activity creates exactly this: a comparison between the student's domain knowledge and the AI's output.

Dunning et al. (2003) on the Dunning-Kruger effect is relevant in the inverse direction: students who know a domain well are BETTER at detecting AI limitations in that domain than students who know it poorly — expertise enables the recognition of AI's incompetence, not just one's own.

Kazemitabaar et al. (2023) provided empirical evidence that AI assistance in learning contexts can conceal genuine knowledge gaps; this activity is designed to expose that dynamic by reversing the expert/novice positioning.

Input Schema

The teacher must provide:

  • Student expertise domain: What students genuinely know well. e.g. "Football (specifically: Manchester United history and Premier League statistics)" / "Hungarian folk music and dance traditions (local knowledge)" / "Competitive swimming — technique, training, competitions" / "K-Pop culture, specifically BTS discography and fandom" / "The geography and history of Budapest's 7th district"
  • Student level: Year group and depth of expertise. e.g. "Year 10, students have strong individual expertise in varied domains (sport, music, geography, cultural traditions)" / "Year 12, students have studied History to A-level depth"

Optional (injected by context engine if available):

  • Interrogation depth: Surface vs. deep errors
  • Discussion format: How findings will be shared
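Putting the required and optional fields together, a teacher's input might look like the sketch below. The key names are assumptions for illustration, not a confirmed schema; the example values are taken from the schema description above.

```python
# Hypothetical input to the skill, combining the required and optional
# fields. Key names are illustrative assumptions, not a confirmed API.
activity_input = {
    # Required:
    "expertise_domain": "Competitive swimming — technique, training, competitions",
    "student_level": (
        "Year 10, students have strong individual expertise in varied domains "
        "(sport, music, geography, cultural traditions)"
    ),
    # Optional (injected by the context engine if available):
    "interrogation_depth": "deep",  # surface vs. deep errors
    "discussion_format": "whole-class synthesis discussion",
}
```

Because the optional fields may be injected rather than supplied by the teacher, consuming code should treat them as absent by default (e.g. read them with `dict.get`).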