Disciplinary AI Literacy Sequence Designer
What This Skill Does
Generates a multi-lesson sequence in which students systematically compare AI's handling of the same type of question across disciplines — developing a principled mental model of where AI is reliable and where it distorts, based on the type of knowledge the discipline produces. The central insight is that AI's reliability is not uniform: it handles settled factual knowledge differently from contested interpretive claims, and sequential procedural knowledge differently from dispositional knowledge about values and judgment. A student who understands why AI is generally reliable when explaining photosynthesis but unreliable when interpreting the causes of the French Revolution has developed transferable AI literacy — not just a list of "things AI gets wrong", but a predictive framework.

The sequence follows the logic of Maton's (2013) semantic wave: starting from concrete discipline-specific examples, building to an abstract framework (AI reliability varies by knowledge type), then returning to concrete predictions ("for this assignment, in this subject, I expect AI to be X reliable because...").

The skill draws on the library's existing knowledge architecture framework: sequential knowledge (structured, hierarchical, cumulative — as in mathematics or scientific procedures) tends to be well served by AI; horizontal knowledge (multiple valid frameworks that do not displace each other — as in historiography or literary criticism) is where AI is most likely to flatten genuine complexity or present a contested position with false certainty.
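The predictive framework above can be sketched as a simple lookup. This is an illustrative sketch only — the category names, reliability labels, and examples are assumptions drawn from the description, not part of the skill's specification:

```python
# Illustrative sketch: mapping knowledge types to expected AI reliability.
# Categories and labels are assumptions, not the skill's canonical taxonomy.

KNOWLEDGE_TYPES = {
    "settled factual": "high",        # e.g. explaining photosynthesis
    "sequential procedural": "high",  # e.g. solving a quadratic equation
    "contested interpretive": "low",  # e.g. causes of the French Revolution
    "dispositional": "low",           # e.g. judgments about values
}

def predict_reliability(knowledge_type: str) -> str:
    """Return the expected AI reliability for a given knowledge type."""
    return KNOWLEDGE_TYPES.get(knowledge_type, "unknown")

print(predict_reliability("contested interpretive"))  # low
```

The point of the table shape is pedagogical: students end the sequence able to perform this lookup themselves, for their own subjects.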
Evidence Foundation
Willingham (2007) demonstrated that critical thinking is domain-specific — students who think critically in history do not automatically transfer that skill to biology, because what counts as good evidence differs by discipline. This has a direct AI corollary: AI's limitations in one domain do not automatically transfer to predictions about other domains. Disciplinary AI literacy requires domain-by-domain evaluation.

McPeck (1981) argued that critical thinking is constituted by disciplinary knowledge — you cannot evaluate AI output in history without understanding how historians argue, any more than you can evaluate scientific AI output without understanding scientific reasoning. The sequence here operationalises this: students use their disciplinary knowledge as the evaluative instrument.

Bernstein's (1999) distinction between vertical discourse (hierarchical, cumulative knowledge structures where new knowledge subsumes or displaces old — as in natural sciences) and horizontal discourse (segmented knowledge structures where competing frameworks co-exist — as in social sciences and humanities) directly predicts AI reliability patterns. AI is trained on the full distribution of human knowledge production — in vertical discourse domains, that distribution converges on correct answers; in horizontal discourse domains, it averages across competing frameworks, potentially flattening the distinctions between them.

Maton's (2013) semantic wave concept provides the instructional design logic: the sequence must move students between concrete examples (AI answers about specific topics in specific disciplines) and abstract principles (the knowledge-type framework that explains the pattern), then back to concrete predictions.

Wineburg (2007) on historical thinking as "unnatural" provides a specific example: the skills experts use in a discipline are not intuitive — students must be taught them. The same applies to disciplinary AI literacy: students must be explicitly taught to ask "what kind of knowledge is this discipline producing, and what does that mean for AI's reliability?"
Input Schema
The teacher must provide:
- Target disciplines: The subjects to compare. e.g. "Biology and History" / "Mathematics, History, and Ethics" / "Physics and Literary Studies" / "Geography and Philosophy"
- Student level: Year group. e.g. "Year 11, studying multiple subjects for national examinations" / "Year 12, A-level students"
Optional (injected by context engine if available):
- Anchor question type: The type of question to translate across disciplines
- Knowledge type focus: Which knowledge structure distinction to emphasise
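A request to this skill could be expressed as a structured record. The field names and example values below are hypothetical, mirroring the schema above rather than any published interface:

```python
# Hypothetical input record mirroring the schema above.
# Field names are assumptions; the skill defines no formal data format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SequenceRequest:
    target_disciplines: list[str]                 # required
    student_level: str                            # required
    anchor_question_type: Optional[str] = None    # optional, from context engine
    knowledge_type_focus: Optional[str] = None    # optional, from context engine

request = SequenceRequest(
    target_disciplines=["Biology", "History"],
    student_level="Year 11, studying multiple subjects for national examinations",
)
print(request.target_disciplines)  # ['Biology', 'History']
```

Making the two optional fields default to `None` reflects the schema's distinction between teacher-supplied inputs and values injected by the context engine when available.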