Schools – particularly senior managers – are obsessed by pupil progress, not least because Ofsted inspectors are also obsessed by pupil progress. How many times have you heard ‘an outstanding lesson is one in which outstanding progress is made’? All of this rests on the premise that pupil progress can be modelled in a way that allows us to measure how far up a progression ladder a pupil has moved. I am not convinced that this is possible.
I should be clear that here I am making a distinction between a mark scheme and a progression model. A mark scheme is used to assess a particular piece of work (like an exam question) where – although there are still plenty of grey areas – we might still nonetheless rank work against a set of criteria or against the work of other students.
A progression model, in contrast, is more like the old National Curriculum levels. Here, the criteria are not task-specific. Rather, the levels set out a linear model by which pupils were expected to progress. If you have used phrases such as “Well, he was a Level 4c last year, but this year he is working at Level 4a” then you are placing a child on a progression model where, it is hoped, the child will move up that ladder from one year to the next.
Let’s consider for a moment how tests tend to work in schools. As pupils work their way through a curriculum, they meet new areas of knowledge. In history, I might teach medieval Britain in Year 7 and then early modern Europe in Year 8. Perhaps in biology pupils work on plants in one term and vertebrates in the next. I think it is fairly normal for pupils to be assessed on each new area they cover, as in Table 1.
There is, however, a fundamental problem with this model. Let’s say a pupil scores a Level 4 on medieval Britain and a Level 5 on early modern Europe. Has that pupil made progress? The answer is that we do not know. In order to measure whether someone has got better at something, the thing they are getting better at has to remain the same. Knowledge of medieval Britain is not the same as knowledge of early modern Europe, and therefore we cannot say that a pupil has made progress if they got a Level 4 for the former and a Level 5 for the latter. Indeed, a pupil might get a Level 4 on medieval Britain and then a Level 3 on early modern Europe and still be making progress, as they have gained knowledge of a new curriculum area. In order for us to have a progression model like National Curriculum levels, the yardstick against which pupils are measured has to remain constant. If the yardstick shifts, then we cannot say whether or not a pupil is moving up that progression model. In short, the whole basis of the model in Table 1 is flawed: the thing that pupils are supposed to be getting better at keeps changing.
This immediately means that a progression model such as National Curriculum levels is a priori going to fail. Levels could only have worked if the curriculum area had remained constant throughout a pupil’s time in school. If all I ever taught was medieval British history (and the same bits of medieval British history) then it could work, because there would be a constant body of knowledge against which pupils could be assessed, as in Table 2.
Here, because the curriculum area being assessed remains constant, it is possible to model progression as we would expect that a pupil does better on each successive test. The problem with this is that this is not how a school curriculum works. Over time pupils move on to new curriculum areas, meaning that the yardstick against which pupils are being measured does not remain constant.
So what about an alternative model? Table 3 works on the basis that each successive test assesses pupils’ knowledge of everything they have done so far. So, continuing my example, the Year 7 test might assess knowledge of medieval British history, and the Year 8 test would assess knowledge of medieval British history and early modern European history.
This means that a test in Year 8 ought to be drawing on work done in Year 7, while a test in Year 9 should be assessing everything done in Key Stage 3. Ideally, the tests in Years 7, 8 and 9 would also assess work done in primary school. In some subjects (say, maths or languages) I think this is already fairly common – children continue to use addition or the simple present tense – but in many others (sciences, history, geography, literature) a unit-by-unit approach is used that does not assess prior knowledge.
So what might this look like in practice? If you are a Deputy Head with responsibility for data, what ought you to expect results to look like using this kind of model?
In Table 4, Pupil 1’s results remain consistent. We are now so tied to the idea of numbers on a graph going up that we might look at this and think ‘well, Pupil 1 is not making progress – the graph would be a straight line’.
The thing is, Pupil 1 is making progress – perhaps exceptional progress. As each test assesses an increasingly large body of knowledge, by getting 80% in Test 5 this pupil has demonstrated mastery over a much larger body of knowledge than she did in Test 1. This is, incidentally, something we are much happier with in music – someone who consistently gets ‘Merit’ in a graded exam is understood as making strong progress.
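The arithmetic behind this point can be sketched quickly. The topic counts and the flat 80% score below are invented numbers for illustration – they are not taken from the tables above – but they show why a constant percentage on a cumulative test represents a growing absolute body of mastered knowledge.

```python
# Hypothetical illustration: each successive cumulative test covers all
# curriculum areas studied so far, so the body of assessed knowledge grows.
# The topic counts and the flat 80% score are invented for this sketch.

topics_covered = [50, 100, 150, 200, 250]  # topics assessed by Tests 1-5
score = 0.80                               # Pupil 1's flat 80% on every test

for test_number, topics in enumerate(topics_covered, start=1):
    mastered = score * topics
    print(f"Test {test_number}: 80% of {topics} topics "
          f"-> {mastered:.0f} topics mastered")

# The percentage never moves -- a "straight line" on a graph -- yet the
# pupil's absolute mastery grows from 40 topics to 200 topics.
```

On this (invented) scale, the flat line hides a fivefold growth in what the pupil has actually mastered, which is exactly the progress the graph fails to show.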
Most probably you are at the moment thinking about how to shape your assessment structure in a post-levels world. My advice, for what it’s worth, is not to reinvent a square wheel. Levels did not work, not because they were poorly implemented, but rather because the very progression model on which they were based was fundamentally flawed. A priori, levels were never going to work. If you adopt a model similar to the one in Table 3, then you do have a way of measuring progression as each test assesses everything that has been learnt before. This will tell you a lot more about just how much progress a pupil has actually made. It will, however, require you to ditch the idea that progress can be represented by a neat line on a graph: it really can’t.