It is very widely accepted that a teacher ought to know a pupil’s predicted grade. Although I don’t think Ofsted specifically require this, I have heard it said so many times (by school leaders and inspectors alike) that a teacher ought to know the predicted grade of every pupil in his or her class.
The argument, of course, is that the teacher can then make sure that the pupil is ‘on track’ to meet that target grade. If a pupil is predicted an A, but is looking like getting a C, then the teacher knows that he needs to intervene. If a pupil is predicted an E, but is looking like getting a C, then the teacher can feel confident that the pupil is doing well. More generally, it is widely accepted that ‘knowing your pupils’ means knowing a wide range of things (their socio-economic background, any educational or medical needs they have) and the predicted grade is one part of that package.
Yet there is a whole host of reasons why a teacher knowing a predicted grade is a bad idea. Perhaps most importantly, this knowledge creates a cognitive bias on the part of the teacher: a teacher might, for example, have lower expectations of a pupil predicted a Grade C than of one predicted a Grade A. Many schools ask teachers to enter a ‘teacher assessment’ grade every term / half-term / week / minute, purportedly based on classroom work; in practice, I know teachers will generally look at the predicted grade on the system being used, decide whether it is roughly correct, and if so tick the box that says ‘on target’. Couple this with grades being used as an accountability measure, and it is small wonder that teacher assessments are so famously unreliable.
Then there are the wider issues with the data themselves. Predicted grades come with a wide margin of error: I think at best we might be able to say something like “there is a good chance that this pupil will get somewhere between a Grade D and a Grade B”, a range far too wide for modern accountability measures to tolerate. There is also an often overlooked subject dimension. Predicted grades based on KS2 Maths and English results tell a teacher nothing about what history, geography, French, R.E., or literature a pupil knows. Until we get some kind of meaningful baseline test for all subjects at age 11, teachers outside the core subjects are essentially relying on a correlation between pupil IQ (or socio-economic background) and GCSE grades to tell them what grade any given pupil is likely to get.
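To make that margin-of-error point concrete, here is a toy simulation. The correlation of 0.7 between a KS2 score and a GCSE outcome is an illustrative assumption, not an official figure, and the five equal-width “grades” are invented for the sketch; the point is simply that even with a fairly strong cohort-level correlation, pupils with near-identical KS2 scores end up spread across several grades.

```python
import random

random.seed(42)

# Toy model: KS2 score and GCSE outcome are correlated standard-normal
# variables. r = 0.7 is an illustrative assumption, not an official figure.
r = 0.7
pupils = []
for _ in range(10_000):
    ks2 = random.gauss(0, 1)
    gcse = r * ks2 + (1 - r**2) ** 0.5 * random.gauss(0, 1)
    pupils.append((ks2, gcse))

# Map each simulated GCSE outcome onto five equal-width toy "grades".
def grade(z):
    for cut, g in [(-1.5, "E"), (-0.5, "D"), (0.5, "C"), (1.5, "B")]:
        if z < cut:
            return g
    return "A"

# Take only pupils with a near-average KS2 score and see how widely
# their simulated GCSE grades spread.
middling = [grade(g) for k, g in pupils if abs(k) < 0.1]
spread = {g: middling.count(g) for g in "ABCDE"}
print(spread)
```

Run it and the pupils with effectively the same KS2 score land on three or more different grades, which is exactly the “somewhere between a Grade D and a Grade B” problem in miniature.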
When it comes down to it, any teacher in any subject really needs to be able to answer just one question: what does this child already know about my subject? I’ll allow that word ‘know’ to encompass a wide range of things here (including ‘knowing how to do something’). A predicted grade will not tell a teacher this: the only thing that will tell a teacher this is a variety of assessment types that build up a picture of a child’s knowledge base. The even greater risk, of course, is that teachers, under pressure, might begin to treat the predicted grade as a proxy for what a child already knows.
So here is a vaguely radical idea that might be unworkable in practice. Why don’t we stop classroom teachers from focusing on predicted grades? Doing so simply distracts them from more important questions. We don’t need members of SLT asking teachers “what is x’s predicted grade?” We need them asking “which parts of your curriculum have not yet been mastered by all of the pupils in your class?”
Of course the wider question is whether it makes sense at all to hold schools accountable using predicted grades. All value-added measures rely on these predictions, including the new Progress 8 measure, and people’s careers (headteachers’ especially) can be ruined if these numbers do not add up. One of these days the whole thing will be laid out in court off the back of an employment case, and the Emperor’s New Clothes of predicted grades will be revealed for what they are. But that is for another blog post…
As a little aside, I often find I need to explain this problem to teachers of core subjects. Consider two pupils.
Pupil A and Pupil B attend the same secondary school. Pupil A has been taught no history at all in primary school, but is given a prediction of GCSE Grade B off the back of her KS2 English result, which she then achieves at the end of Year 11. Pupil B has been taught history amazingly well at a different primary school, and is also given a prediction of GCSE Grade B off the back of her KS2 English result. She too gets a Grade B at GCSE. According to the value-added measures, the secondary school has done equally well with both pupils. In reality, of course, it has done very well with Pupil A, but has probably not done so well with Pupil B. The predictions in these cases fall back on reasoning something like this:
- KS2 English result is correlated with IQ and socio-economic background.
- IQ and socio-economic background are correlated with GCSE History result.
- Therefore the KS2 English result is a good predictor of GCSE History.
There might be some statistical validity lurking behind this at the cohort level, but it does not take long to realise that, particularly when applied to individual pupils, it is a ridiculous conclusion to reach.
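A quick simulation makes the weakness of the chain visible. Here the 0.7 link strengths are illustrative assumptions (I am not claiming these are the real effect sizes), and the “general aptitude” factor is just a stand-in for whatever IQ and socio-economic background are proxying: even when each link in the chain is reasonably strong, the chained correlation is roughly the product of the links, so the English-to-History link ends up noticeably weaker than either step.

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Toy model: a latent factor (a stand-in for IQ / socio-economic
# background) drives both the KS2 English result and the GCSE History
# result. The 0.7 link strengths are illustrative assumptions.
r = 0.7
ks2_english, gcse_history = [], []
for _ in range(20_000):
    latent = random.gauss(0, 1)
    ks2_english.append(r * latent + (1 - r**2) ** 0.5 * random.gauss(0, 1))
    gcse_history.append(r * latent + (1 - r**2) ** 0.5 * random.gauss(0, 1))

# Each link is 0.7, but the chained correlation comes out near 0.7 * 0.7.
print(round(corr(ks2_english, gcse_history), 2))
```

A correlation of around 0.5 leaves roughly three quarters of the variance in any individual pupil’s History result unexplained, which is why the cohort-level pattern tells a teacher so little about what any given child actually knows.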