The following post is my seminar notes from a session I ran with Christine Counsell on why ‘summative’ and ‘formative’ assessment need to be decoupled, particularly in terms of not principally using summative marking criteria to inform teaching. I am particularly grateful to the history teaching community (particularly the so-called ‘History Pizza Assessment Group’ in Cambridge) and Daisy Christodoulou for helping me clear up my thinking on this matter.
The prevailing model in formative assessment is to give pupils a task (an essay, an exam question, a piece of music to play) and then to judge their competence at that task using a series of levels, often based on either task-specific or generic descriptions of competence. Having ascertained how well a pupil performed, we identify what was absent from the performance: what should have been done that was not, or what could have been done better? The feedback we offer to pupils is based on this analysis of deficit and framed in terms of ‘reaching the next level’: comments take the form “in order to do this task better you needed to…” Let’s call this model of feedback ‘coaching’.
And, in some ways, this is fine. It is a particularly effective form of feedback when the person you are coaching already has the wider domain of knowledge and skills they need in order to make sense of the feedback.
The ‘coaching’ model of feedback is, however, less useful when the person you are teaching does not know enough to make sense of the feedback. On the contrary, the feedback you offer might actually be more confusing than helpful.
In order to help a pupil in this situation, you need to identify where the gaps are in the knowledge base of the pupil: your assessment is less a matter of “how well was the task performed?” and more one of “what are the possible causes of this task being performed poorly?”
But herein lies the problem. The knowledge of your pupils cannot be seen directly: it can be inferred only from their performance in a task. But, paradoxically, the task you want the pupils to perform well in (the essay, the exam, the piece of music) might not be the most suitable task for identifying the causes of failure in it. This means that (a) feedback you give on the performance (‘coaching’) might not be understood by the pupil and (b) you are not diagnosing what it is you need to teach or re-teach in order to fill the gap in the pupil’s knowledge base.
By way of example, a doctor might see that a marathon runner has collapsed halfway through a marathon. But simply noticing the failure does not explain why the runner has failed. It would certainly be pointless at this stage to begin giving feedback to the runner on how to run better. Instead, the doctor is likely to carry out some tests: lung capacity, blood-sugar levels, tests for an infection, and so on. These tests look nothing like the final performance. What the doctor then gets the runner to do (e.g. modifying diet before the run) also does not immediately look like the final performance. But this is precisely the kind of help the runner needs to do better next time.
So what might a diagnostic approach, rather than a coaching approach, to formative assessment look like? In short, it might involve asking two questions:
- what are the common causes of weak performance in the subject I am teaching?
- what tests can diagnose those causes most accurately?
All of this might be helpful when working with a class who are close to the final performance – e.g. a class of Year 11 pupils with a looming exam. But what about younger children? It would be even more pointless to start coaching younger children on exam performance: their knowledge base is even smaller, and therefore they are likely to be even more confused by the feedback. It is here that we need our curriculum to help us out. A curriculum sets out what pupils ought to know and, although a curriculum is not written simply to prepare children for tests, it is nevertheless the case that pupils who have learnt the curriculum well should be able to perform well in tests based on that curriculum. For younger children, the crucial question is simply:
- have the children learnt what is on the curriculum?
Here again the tools you need have to be fit for purpose. Some tools are good at determining whether factual knowledge has been learnt. Some tools are good at spotting misconceptions. Some tools are good at seeing whether knowledge of one thing has been sufficiently well connected to knowledge of another.
So what are the key take-away messages here?
First, there is something to be gained by coaching pupils on performance in a task, but this is most likely to be useful when they already have a good knowledge of what is being learnt. Feedback based on assessment criteria therefore has a place, but its place is limited, and beyond that it can be counter-productive.
Secondly, diagnostic formative assessment is instead a matter of identifying the possible causes of poor performance, and then testing to see which of those causes is responsible. These kinds of assessment will not necessarily look like the final performance: indeed, they might well look very different.