1) Abolish whole-school data reporting on anything less than an annual basis.
Whether from teachers to school leaders or from school to parents, you name it, get rid of it. It is only on an annual basis that we can conduct assessments thorough enough to provide valid inferences about students’ overall understanding of a subject, and only then that we can pursue reliability through moderation. (If you doubt the former point, ask language teachers trying to mark assessments of speaking, reading, writing and listening on a termly basis.) Valid and reliable data would provide excellent justification for significant interventions for students. This is necessary to create the time to pursue point 2 properly:
2) Devolve regular assessment to teachers and departments
I’m not arguing for less assessment, I’m arguing for frequent, useful assessment. Levels do not help a teacher or a head of department (if they did, it would be possible to explain how to support a student at ‘level 5c’ in history without any further information).
What teachers need to assess on a regular basis is students’ knowledge of individual concepts and ideas, and their capacity to use that knowledge; this kind of analysis must happen at a departmental level. Question-level analysis in departments provides usable insights: if 80% of students in Class A answered a question about the Blitz well, but only 40% in Class B did so, it seems highly likely that the teacher of Class A has done something her colleague would benefit from learning about. Departments can create short-term solutions (Teacher Y spends a few minutes reteaching the Blitz using Teacher X’s approach) alongside longer-term ones (reconsidering the unit plans). ‘Intervention’ would be more frequent, and more useful, than at a whole-school level.
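To make the idea concrete, here is a minimal sketch of that kind of question-level analysis. The function name, data shape and the 25-point gap threshold are illustrative assumptions, not any school’s actual software; the example data mirrors the Blitz scenario above.

```python
# Hypothetical sketch of departmental question-level analysis.
# All names and thresholds here are illustrative assumptions.
from collections import defaultdict

def question_level_analysis(results, gap_threshold=0.25):
    """results: list of (class_name, question_id, correct: bool).
    Returns per-(class, question) percent-correct rates, plus the
    questions where the gap between classes exceeds gap_threshold."""
    counts = defaultdict(lambda: [0, 0])  # (class, question) -> [correct, total]
    for cls, q, correct in results:
        counts[(cls, q)][1] += 1
        if correct:
            counts[(cls, q)][0] += 1
    rates = {key: c / t for key, (c, t) in counts.items()}
    flagged = {}
    for q in {question for (_, question) in rates}:
        class_rates = {cls: r for (cls, qq), r in rates.items() if qq == q}
        # A large spread suggests one teacher has an approach worth sharing.
        if max(class_rates.values()) - min(class_rates.values()) >= gap_threshold:
            flagged[q] = class_rates
    return rates, flagged

# Example mirroring the scenario above: Class A 80%, Class B 40% on the Blitz.
results = (
    [("A", "blitz", True)] * 8 + [("A", "blitz", False)] * 2
    + [("B", "blitz", True)] * 4 + [("B", "blitz", False)] * 6
)
rates, flagged = question_level_analysis(results)
print(flagged)  # {'blitz': {'A': 0.8, 'B': 0.4}}
```

The output is exactly the conversation-starter the paragraph describes: which question, which classes, and how big the gap is.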
This is almost precisely what we do at Michaela. We examine twice a year rather than annually, but bar that, Harry pretty much just summarised our assessment policy in two paragraphs.
Multiple choice quizzes
Pupils take weekly multiple-choice quizzes online. These are marked automatically, with question-level analysis that looks like this.
In fortnightly line management meetings, we discuss the implications of this data. How is the class doing overall? Do I need to reteach a topic? Stretch the class further? How are individual pupils doing? What can I do to support those who are struggling? How have the historically lowest performing pupils done recently? What’s worked well and what hasn’t? Why?
Teachers can also set pure recap quizzes, testing content from throughout the year, by picking and mixing old questions.
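A recap quiz like that is easy to picture in code. This is a minimal sketch under my own assumptions (a question bank keyed by topic, round-robin sampling so the quiz spans the whole year); it is not the actual quizzing platform’s API.

```python
# Hypothetical sketch of "pick and mix" recap-quiz building.
# The question bank structure and function name are assumptions.
import random

def build_recap_quiz(question_bank, num_questions, seed=None):
    """question_bank: dict mapping topic -> list of question ids.
    Round-robins across topics so the quiz draws from the whole year."""
    rng = random.Random(seed)
    topics = list(question_bank)
    # Shuffle each topic's pool so repeat quizzes vary.
    pools = {t: list(qs) for t, qs in question_bank.items()}
    for pool in pools.values():
        rng.shuffle(pool)
    quiz = []
    i = 0
    while len(quiz) < num_questions and any(pools.values()):
        topic = topics[i % len(topics)]
        if pools[topic]:
            quiz.append(pools[topic].pop())
        i += 1
    return quiz

bank = {
    "autumn": ["q1", "q2", "q3"],
    "spring": ["q4", "q5"],
    "summer": ["q6", "q7", "q8"],
}
quiz = build_recap_quiz(bank, num_questions=4, seed=0)
print(quiz)  # four question ids drawn from across the year's topics
```

Sampling evenly across topics, rather than uniformly over all old questions, is one simple way to stop a recap quiz clustering on whatever unit happens to have the most questions.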
You can tag questions; set time limits; set a maximum number of retakes; upload from Word or Excel; embed TeX, images, videos…
You can choose between free-text entry, single-answer multiple choice, or multiple-correct multiple choice. The latter can really add rigour, as can carefully planned distractors. Here’s a random assortment I just screenshotted:
The other component of the assessment system is the biannual exams. These have two components: a multiple choice element, and a traditional written exam. The multiple choice quiz tests a random selection of knowledge from the entire domain – all the content they have studied since beginning at Michaela. The written papers are synoptic wherever possible, drawing knowledge we expect them to have retained from previous terms and years. We’re not in the business of tacit acceptance that you’ll forget everything from Year x when you’re in Year x + 1.
Pupils are given a separate percentage score for each part. Internal and external moderation calibrates standards for those percentages. We do not touch levels! Levels are not part of the vocabulary at Michaela. This is not a ladder system where you go up if you’re improving: pupils who stay on the same percentage over their school career are making expected progress. A pupil getting 80% in Year 11 is doing so on a harder, broader range of knowledge than the test they got 80% on in Year 7.
How it’s going
This assessment system is what I was trying to shoe-horn in as an individual teacher behind my classroom door in my old school, except without the shoe-horning.
Last year, I was attaching meaningless levels to half-termly assessments, spending an age marking and doing data entry for junk data that would sit on SIMS and not tell anyone anything actionable. In my classroom, I would play about with apps like QuickKey and use websites like diagnosticquestions.com to get a useful picture of my classes’ learning. I was a lone ranger. In terms of a Venn diagram, we didn’t have any overlap:
Now data has become worth looking at. What I was hiding behind closed doors is the whole-school policy. The workload of meaningless marking and data entry has been felled. Our Venn diagram becomes one lovely overlapping circle:
- We had a few teething issues with the technology, though nothing major.
- It’s different. Our system is not challenging to understand, but if you’re looking for levels and a nice linear progression with numbers going up, you’ll have to try a little harder. I won’t apologise for that. If you’re not data literate enough to understand that something staying at 80% can be progress, then you should not have any role in judging my performance or my school’s performance.
- It’s spoiled me for any other school: I couldn’t go back to an assessment system that didn’t make sense.
I see so many people blogging about the right ideas on assessment. I see so many people doing the right thing behind their classroom door. School leaders now need to capitalise on that and make whole school systems that make sense. Drop the junk data, work with your teachers. Increase the Venn diagram overlap.