
AI Learning Boundary Mapper

What This Skill Does

Generates a component-by-component analysis of a specific assignment, mapping which elements benefit from AI assistance, which are neutral, and which are undermined by AI involvement, based on the learning objectives the assignment serves. This is the teacher-facing design tool for AI-age assignment redesign: it takes an existing assignment and produces a boundary map that allows teachers to set specific, defensible AI use policies rather than blanket "AI allowed" or "no AI" positions.

The central insight is that within any single assignment, different components serve different learning objectives, and AI assistance that helps with one component may undermine another. An essay that requires both research (AI can assist with summarising context) and original argumentation (AI assistance bypasses the cognitive work of constructing an argument) benefits from a component-level policy, not a uniform one.

The output includes:

  • An objective analysis: for each learning objective, whether AI assistance supports or undermines it
  • A component boundary map
  • Defensible AI policy recommendations
  • An optional Google vs. AI chatbot tool comparison for information-gathering tasks
  • Redesign suggestions that preserve learning-critical challenge while permitting AI use where it genuinely helps

This skill is the teacher-design complement to metacognitive-monitoring-ai-contexts: boundary-mapping prevents the metacognitive risk from arising; metacognitive-monitoring-ai-contexts addresses it when it does.
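The boundary map described above can be modelled as a small data structure. A minimal sketch in Python; the names (`Classification`, `Component`, `BoundaryMap`) and the three-way policy mapping are illustrative assumptions, not an API defined by the skill:

```python
from dataclasses import dataclass, field
from enum import Enum

class Classification(Enum):
    # Hypothetical three-way split implied by the skill description.
    AI_ASSISTIVE = "AI assistance supports the learning objective"
    NEUTRAL = "AI assistance neither helps nor harms"
    AI_UNDERMINING = "AI assistance bypasses the cognitive work"

@dataclass
class Component:
    name: str                       # e.g. "background research"
    learning_objective: str         # the objective this component serves
    classification: Classification
    rationale: str                  # why the component was classified this way

@dataclass
class BoundaryMap:
    assignment: str
    components: list[Component] = field(default_factory=list)

    def policy_summary(self) -> dict[str, str]:
        """Map each component to a one-line AI-use policy."""
        policy = {
            Classification.AI_ASSISTIVE: "AI permitted",
            Classification.NEUTRAL: "AI permitted, disclosure required",
            Classification.AI_UNDERMINING: "AI not permitted",
        }
        return {c.name: policy[c.classification] for c in self.components}

# Example: the essay case from the description above, split into components.
essay = BoundaryMap(
    assignment="Year 10 History: Treaty of Versailles essay",
    components=[
        Component("background research", "summarise historical context",
                  Classification.AI_ASSISTIVE,
                  "summarising context is not the assessed cognitive work"),
        Component("original argumentation", "construct an evidence-based argument",
                  Classification.AI_UNDERMINING,
                  "the argument-construction process is itself the objective"),
    ],
)
print(essay.policy_summary())
# → {'background research': 'AI permitted', 'original argumentation': 'AI not permitted'}
```

The point of the sketch is that the policy attaches to components, not to the assignment as a whole.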

Evidence Foundation

Wiggins & McTighe (2005) established the backward design principle: assessment design should start with learning objectives (Stage 1) and work backward through evidence of learning (Stage 2) to learning activities (Stage 3). This principle applies directly to AI boundary-setting: the question is not "should AI be used in this assignment?" but "which learning objectives does this assignment serve, and does AI assistance support or bypass the cognitive work those objectives require?"

Bjork et al. (2013) documented illusions of competence: conditions under which learners feel they have learned more than they actually have. AI assistance produces the fluency illusion: tasks completed with AI assistance feel complete and correct, but the cognitive work that generates durable learning has been bypassed. The boundary map is designed to identify which assignment components are most vulnerable to this effect.

Kazemitabaar et al. (2023) provided direct empirical evidence: AI-assisted programming students completed tasks faster and with fewer errors but showed weaker understanding on subsequent tasks without AI support. This effect is used here as the model for identifying "AI-undermining" components: any task where the cognitive process, not just the product, is the learning objective.

Kirschner, Sweller & Clark (2006) established that minimally guided instruction produces weaker learning than explicit instruction for novices, because novice learners need the cognitive challenge of the task itself to build the knowledge structures required for expertise. This supports identifying components where removing cognitive challenge (via AI) also removes learning.

Wineburg & McGrew (2019) provided indirect support for the tool-comparison dimension: different information tools have different epistemic properties (verifiable citations vs. synthesised inference), and students benefit from explicit guidance about which tool to use for which information need.

Input Schema

The teacher must provide:

  • Assignment description: What students do. e.g. "Year 10 History: write a 600-word essay arguing whether the Treaty of Versailles was the main cause of WWII, using at least three named historians' arguments" / "Year 9 Science: write a lab report for the rates of reaction experiment — method, results, analysis, conclusion" / "Year 12 English: comparative essay on two texts studied in class"
  • Learning objectives: What the assignment develops. e.g. "Students learn to construct an evidence-based historical argument, evaluate competing historiographical interpretations, and use source evidence appropriately" / "Students learn to write scientific analysis from data they collected themselves, drawing valid conclusions"

Optional (injected by context engine if available):

  • Current AI policy: What's currently permitted
  • Student level: Year group
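The required and optional fields above can be captured in a small validation sketch. The key names (`assignment_description`, `learning_objectives`, `current_ai_policy`, `student_level`) are illustrative assumptions derived from the bullet list, not documented input keys:

```python
# Illustrative field names; the skill's actual input keys are not specified here.
REQUIRED_FIELDS = {"assignment_description", "learning_objectives"}
OPTIONAL_FIELDS = {"current_ai_policy", "student_level"}

def validate_input(payload: dict) -> list[str]:
    """Return a list of problems; an empty list means the input is usable."""
    problems = []
    for name in sorted(REQUIRED_FIELDS - payload.keys()):
        problems.append(f"missing required field: {name}")
    for name in sorted(payload.keys() - REQUIRED_FIELDS - OPTIONAL_FIELDS):
        problems.append(f"unrecognised field: {name}")
    return problems

# Example drawn from the Input Schema section above.
example = {
    "assignment_description": "Year 9 Science: lab report for the rates of "
                              "reaction experiment",
    "learning_objectives": "Write scientific analysis from self-collected "
                           "data, drawing valid conclusions",
    "student_level": "Year 9",
}
assert validate_input(example) == []
```

Treating the two optional fields as context-engine hints (rather than required input) mirrors the "injected by context engine if available" note above.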