
Learning Analytics Interpretation Guide

What This Skill Does

Guides a teacher through interpreting a specific learning dataset — assessment results, engagement metrics, study behaviour patterns, or any other quantitative or qualitative data about student learning — to identify actionable patterns and inform specific teaching decisions. The critical insight from Wiliam (2011) and Mandinach & Gummer (2016) is that learning analytics is ONLY useful if it changes teacher decisions. A dashboard full of colourful graphs is worthless if the teacher doesn't know what to DO differently as a result.

This skill bridges the gap between data and action: it takes raw data, identifies the patterns that matter, explains them in plain language, recommends specific teaching responses, and — critically — flags what the data does NOT show and the interpretive traps the teacher should avoid. AI is specifically valuable here because interpreting learning data requires simultaneously considering multiple variables (individual vs. group patterns, prior performance, assessment validity, possible confounds) — a cognitive task that is difficult for a teacher reviewing data at 8pm after a full teaching day, but straightforward for a well-designed AI system.

Evidence Foundation

Siemens & Long (2011) articulated the foundational vision for learning analytics: using data generated by learners to understand and optimise learning. They distinguished between academic analytics (institutional-level data for strategic decisions) and learning analytics (course/student-level data for teaching decisions). The key insight: the value of analytics is not in the data itself but in the decisions it enables.

Bienkowski et al. (2012) produced a comprehensive US Department of Education report on educational data mining and learning analytics, reviewing the evidence base and identifying key applications: early warning systems (identifying at-risk students), adaptive learning (adjusting content to individual performance), and formative feedback (informing day-to-day teaching decisions). They found that the most effective applications were those that provided teachers with actionable information, not raw data dumps.

Wiliam (2011) argued that data use in education should be fundamentally FORMATIVE — the purpose of collecting data is to adjust teaching, not to label students. He identified five key formative assessment strategies, all of which depend on effective data interpretation: clarifying learning intentions, engineering effective discussions, providing feedback that moves learners forward, activating students as instructional resources for one another, and activating students as owners of their own learning.

Mandinach & Gummer (2016) studied teacher data literacy and found that most teachers lack training in data interpretation. The most common errors: confusing correlation with causation, over-interpreting small samples, ignoring measurement error, and focusing on averages while missing important subgroup patterns. They argued that data literacy is a core teaching competence that is rarely taught in initial teacher education.

Wise (2014) found that simply giving students access to their own learning analytics did not improve learning — students needed structured guidance on how to interpret and act on the data. The same principle applies to teachers.
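Two of the interpretive traps above — averages masking subgroup patterns, and small samples carrying large uncertainty — can be made concrete with a minimal sketch. All scores, group labels, and the homework split below are invented for illustration; the point is only that a single class mean hides a large gap between subgroups, and that a rough standard error shows how uncertain small-group means are.

```python
# Hypothetical scores illustrating two common data-literacy errors:
# (1) a class average that masks a subgroup gap, and
# (2) small samples whose means carry wide uncertainty.
from statistics import mean, stdev
from math import sqrt

scores = {
    "completed_homework": [72, 68, 75, 80, 66, 71, 77, 74],
    "missed_homework":    [41, 38, 45, 52, 36, 44],
}

all_scores = [s for group in scores.values() for s in group]
print(f"class mean: {mean(all_scores):.1f}")  # mid-range on its own; says little

for label, group in scores.items():
    m = mean(group)
    # Rough standard error of the mean: with small n the ±interval is wide,
    # so treat small-group differences cautiously.
    se = stdev(group) / sqrt(len(group))
    print(f"{label}: mean {m:.1f} ± {1.96 * se:.1f} (n={len(group)})")
```

Run on these invented numbers, the class mean sits near 60 while the two subgroups differ by roughly 30 points — exactly the pattern a single headline average would hide.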

Input Schema

The teacher must provide:

  • Dataset description: What data is available, e.g. "Year 10 mock exam results: class of 28, scores range from 23% to 91%, mean 58%, median 54%. Question-level data available: Q1 (recall) 78% correct, Q2 (application) 45% correct, Q3 (analysis) 31% correct, Q4 (evaluation) 28% correct" / "Kaku engagement data for Year 7 maths: 24 students completed homework last week, 6 didn't. Of those who completed it, average time spent was 18 minutes. Three students spent over 40 minutes. Two students completed in under 5 minutes with 100% accuracy" / "Reading log data: 30 students, 3 weeks of daily reading minutes. Class average dropped from 22 min/day in week 1 to 14 min/day in week 3"
  • Decision context: What the teacher needs to decide, e.g. "I need to plan the next two weeks of revision lessons — what should I focus on?" / "I'm deciding whether to set more challenging homework or to go back and reteach the basics" / "I'm concerned about the reading decline and need to decide whether to intervene or whether this is normal variation"

Optional (injected by context engine if available):

  • Student level: Year group and proficiency
  • Subject area: The curriculum subject