The term ‘the curriculum is the progression model’ is increasingly thrown around today, not least because it has become part of the language of the school inspectorate. I find the language and concepts of education deeply fascinating at the best of times, but this one has a particular resonance for me as, to the best of my knowledge, it was a term first used by me and Christine Counsell. Yet I have on several recent occasions found myself discussing it with people and realising that the way it has been understood is some distance from how Christine and I originally used the term. I want to use this blog post to set out what I understood the term to mean when I first began to use it.
Some time around 2014 I was sat with Christine discussing the end of National Curriculum Levels. For around two decades, the National Curriculum Levels had been the progression model used from Key Stage 1 to Key Stage 3, and their use was ubiquitous in schools. It is easy to forget that we are now six years on from their abolition: the sad retention stats in education mean that there are large numbers of teachers who have trained, worked as teachers and left teaching without ever experiencing the National Curriculum Levels. I shall spare readers here a long account of why the levels were horribly flawed, but if this interests you, or you would like a reminder, then I wrote about this extensively in the early days of this blog.
Christine and I were despondently looking at all the different assessment models that were popping up in schools to replace levels, finding that in almost all cases schools were either re-inventing levels (usually in a worse form) or moving to using GCSE marking criteria to assess pupils at Key Stage 3. Years of analysis from the history education community on issues with assessment had given us a head-start on the problem: we were fairly confident that the myriad of weak assessment models emerging in schools were primarily a manifestation of weak progression models underpinning those assessments.
If you’ll bear with me, I’ll explain the line of thinking.
For many years, the way of modelling progression in schools had been a version of “how do I move from Summative Grade X to Summative Grade Y?” Those who were teaching in the years of National Curriculum levels will have lived through the hell of pupils writing targets of the form “I need to move from a Level 5 to a Level 6” or “I need to move from a Grade C to a Grade B” or something to that effect. To get better at something meant to move from one summative grade to the next: plenty of schools had classrooms with assessment ‘ladders’ on the wall or in pupil exercise books, usually showing what the criteria were for the next level up.
This understanding of progression crashed into the well-meaning but deeply-flawed interpretation of ‘Assessment for Learning’ as “teach pupils what they need to reach the next level”. Because progression was defined as fulfilling the grade requirements of the next level up, Assessment for Learning frequently took the form “to get a Level 4c you must…” It was not uncommon for this to be integrated into lesson objectives and outcomes (and no, I never worked out what the difference was either).
Now there are good reasons why most of the assessments being used in schools were (and still are) neither valid nor reliable (and if you have not yet read Christodoulou’s book Making Good Progress, then you are highly likely to fall into common assessment traps), but issues with the validity and reliability of assessments are not actually at the heart of the problem I want to get at here. The problem, rather, was that mark schemes do not describe the journey one needs to go on to get from one level to the next.
The best analogy for this is, as Christodoulou explains very well, the running of a marathon. One does not get better at running a marathon by running marathons. Although running a faster time in one’s second marathon might indicate that someone has ‘got better’ at marathons, this fact tells us nothing about what journey our athlete had been on to produce that better outcome. This is true of all summative assessments. What these do is aim to capture how well a specific performance has been done. How good is this specific answer? How good is this essay? How well was that piece of Chopin performed? Mark schemes – particularly those that are description-based – tell us what distinguishes stronger from weaker performance, but they give no account of what has caused the stronger or weaker performance.
This becomes particularly problematic when mark schemes (e.g. National Curriculum levels, or GCSE mark schemes) are used to plan teaching. I’ll take an example from history. Let’s say that a really important part of writing a decent essay answering the question “Why did the First World War break out in 1914?” is having a good understanding of the Schlieffen Plan. It would be difficult to imagine a student writing a persuasive answer to that question who did not understand the implications of the Schlieffen Plan. Now a generic mark scheme (e.g. National Curriculum levels, or the generic criteria for an exam mark scheme) would not even mention this. Even a more detailed question-specific mark scheme is unlikely to specify in much detail what someone needed to know about the Schlieffen Plan to answer the question, and almost certainly would not account for what prior knowledge was needed to make sense of the Schlieffen Plan (e.g. the geography of western Europe, the prior relationship of the UK and Belgium, the transport infrastructure of Germany and Russia, and so on). As Kate Hammond showed very well, there are layers and layers of knowledge that sit behind good exam performance in writing, and it is simply not possible to account for all of that in a mark scheme.
This is why mark schemes (i.e. accounts of summative performance) cannot tell us as teachers what to do to help a pupil get better at our subject. A well-designed summative assessment might be able to tell us with some degree of accuracy whether or not a pupil has got better at something, but that assessment cannot inform how to get better.
It was this line of thinking that Christine and I were going through in thinking about why schools were continuing to make a hash of assessment in the post-levels world. In short, schools were continuing to use summative mark schemes as models of how to improve at a subject. That, we concluded, was fundamentally flawed. If not the mark scheme, something else must be describing the journey that someone has to go on in order to get better. I seem to remember that – as is often the case when you are working on something hard with someone with whom you do a lot of hard thinking together – we came to the realisation at the same moment: what is it that describes the journey one goes on to get better at something, if not the curriculum?
A curriculum is too frequently understood to be simply a list of things to learn and indeed, by some definitions, that is all it is. But Christine and I, and I think most teachers would share this assumption with us, saw a curriculum as having a temporal dimension: it did not just set out what was to be learned, but also provided a sequencing of that learning. Christine has far better ways of describing curriculum than I do and I can do little more here than point you towards her work on this, particularly the analogy of an opera or a novel. The fundamental point is, however, not actually that complicated: a curriculum sets out the journey that someone needs to go on to get better at the subject. In short, it models the progress that we would hope (although cannot guarantee) that someone will make. The curriculum is the progression model.
Five years or so on from this realisation, and I am not sure the world has really moved on much at all. A faint glimmer of hope can, I think, be seen in Ofsted’s new framework, where the shift away from using data to track ‘progress’ and towards judging the quality of curriculum means that inspectors are now supposedly less interested in tracking progress through mark schemes, and more interested in asking questions of the curriculum such as ‘why this?’ and ‘why then?’, which both point towards a consideration of how a specific point on a curriculum is part of a progression model. But this is a deep-rooted thing to shift, and the importance of summative attainment invites us to fixate on the thing at hand (e.g. why is this work a Level 3 and not a Level 4?) rather than to reflect on the kind of curriculum that results in a higher proportion of pupils writing Level 4 answers.
Regular readers will know that I am currently engaged in the task of trying to answer the question ‘why is England’s education revolution faltering?’ The crass answer here would be “because people haven’t listened to me and Christine”, but I actually want to point to a broader issue. England’s education revolution involved a fair degree of scorched-earth critique in which old assumptions and norms were cast aside. In their place have come some complex ideas attached to simple phrases: in addition to the one covered in this post, I offer you “knowledge-rich curriculum”, “teacher-led instruction” and “the cultivation of schema”. These terms are now widely used, not least in preparing for the new Ofsted framework, but a changing currency of terminology does not necessarily mean that long-standing assumptions have been reconsidered in reality.