Criterion-Referenced Rubric Generator

What This Skill Does

Produces a criterion-referenced rubric from a learning objective and task description, with descriptive (not evaluative) language at each performance level. Each criterion describes what the student's work LOOKS LIKE at each level — not how "good" it is. The output includes the full rubric, a design rationale, a student-friendly version for self/peer assessment, and calibration notes for consistency across markers. AI is specifically valuable here because effective rubric design requires precise, descriptive language that distinguishes between performance levels without using evaluative labels ("excellent," "good," "poor") or vague quantity indicators ("some," "many," "thorough") — and each descriptor must be qualitatively distinct from the adjacent levels, not just a scaled version of the same description.
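The constraint above — no evaluative labels and no vague quantity words in level descriptors — can be checked mechanically. Below is a minimal sketch of such a check in Python; the word lists are illustrative examples drawn from this description, not an exhaustive vocabulary, and the function name is hypothetical.

```python
# Minimal sketch: flag evaluative labels and vague quantity words in a
# rubric descriptor. Word lists are illustrative, not exhaustive.

EVALUATIVE = {"excellent", "good", "poor", "weak", "strong"}
VAGUE_QUANTITY = {"some", "many", "thorough", "several", "adequate"}

def flag_descriptor(descriptor: str) -> list[str]:
    """Return warnings for non-descriptive language in a descriptor."""
    words = {w.strip(".,;:!?").lower() for w in descriptor.split()}
    warnings = []
    for w in sorted(words & EVALUATIVE):
        warnings.append(f"evaluative label: '{w}'")
    for w in sorted(words & VAGUE_QUANTITY):
        warnings.append(f"vague quantity: '{w}'")
    return warnings

# An evaluative descriptor is flagged; a descriptive one passes clean.
print(flag_descriptor("Good use of evidence"))
print(flag_descriptor("Uses specific textual evidence to support each analytical point"))
```

A check like this catches only surface vocabulary; whether adjacent levels are qualitatively distinct still requires human (or model) judgment.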

Evidence Foundation

  • Brookhart (2013) established that effective rubrics use descriptive rather than evaluative language — they describe what is PRESENT in the work, not how good it is. "Uses specific textual evidence to support each analytical point" is descriptive; "Good use of evidence" is evaluative. Descriptive rubrics produce more reliable scoring and more useful feedback because they tell students exactly what to do differently, not just that they need to "do better."
  • Andrade (2000, 2013) demonstrated that rubrics improve both instruction and learning when shared with students before the task — they function as learning tools, not just grading tools. The effect is strongest when rubrics are used for self-assessment.
  • Jonsson & Svingby (2007) found that analytic rubrics (separate criteria scored independently) are more reliable and produce better feedback than holistic rubrics (single overall judgment), though they take longer to use.
  • Sadler (1989) established that assessment quality depends on the "gap" being visible — students must be able to see the difference between where they are and where they need to be. Descriptive rubric levels make this gap concrete.
  • Panadero & Jonsson (2013) confirmed that rubric use improves student performance, particularly when combined with self-assessment, with moderate effect sizes.

Input Schema

The teacher must provide:

  • Learning objective: What the rubric assesses. e.g. "Students can write a persuasive speech that uses rhetorical devices to influence the audience" / "Students can design and carry out a fair test and draw valid conclusions"
  • Task description: The specific task. e.g. "Write and deliver a 3-minute persuasive speech on a topic of your choice" / "Plan and carry out an experiment investigating the effect of light on plant growth, then write a conclusion"
  • Student level: Year group. e.g. "Year 8"

Optional (injected by context engine if available):

  • Criteria count: Number of criteria (default: 4)
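The input schema above can be sketched as a small data structure. This is a hedged illustration, assuming the skill receives structured input; the class and field names are hypothetical, not part of the skill's actual interface.

```python
# Illustrative sketch of the input schema; names are hypothetical.
from dataclasses import dataclass

@dataclass
class RubricRequest:
    learning_objective: str   # what the rubric assesses
    task_description: str     # the specific task
    student_level: str        # year group, e.g. "Year 8"
    criteria_count: int = 4   # optional; defaults to 4

request = RubricRequest(
    learning_objective=("Students can design and carry out a fair test "
                        "and draw valid conclusions"),
    task_description=("Plan and carry out an experiment investigating the "
                      "effect of light on plant growth, then write a conclusion"),
    student_level="Year 8",
)
print(request.criteria_count)
```

Making `criteria_count` a defaulted field mirrors the required/optional split in the schema: the three required inputs have no defaults, so omitting them is an error.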