This post describes our efforts at moving beyond the ‘crutch’ of National Curriculum levels at Cottenham Village College (CVC). It contains a proposal that emerged from an SLT meeting. I emphasise that it is a proposal.
I’m blogging about it in the hope of harnessing some wisdom from colleagues, and hence some feedback that might enable us to improve it before we launch it.
In my view, parents, staff, SLT, governors and inspectors have been duped for years into believing they know how well pupils are doing by looking at assessed National Curriculum levels in each subject and comparing these with expected ‘flight paths’. We have had a similar system, with data gathered every six weeks. Our assessment has been driven by this system, and the system itself has been driven by a perception that it might be judged by inspectors. In my view it’s wrong.
Before my arrival at CVC in September 2015, one colleague, Head of History Matt Stanford, had done some work proposing a different assessment system that works for History. Having heard about this, I made sure he was also part of the SLT meeting. He had significant input into this proposal, but it’s fair to say it isn’t one person’s work alone.
It is no accident that our most extensive developmental work this year has been in identifying the knowledge we want pupils to gather in each subject area in each year – particularly in Key Stage 3, and in sequencing that knowledge. Basically we have started with the curriculum and challenged ourselves to specify the knowledge that pupils are expected to gain by attending our school. You can read more about one training session that launched that here. You don’t get very far into discussing the curriculum, though, without starting to talk about assessment.
The principles on which we base our assessment come from reading Daniel Koretz’s excellent book Measuring Up. We have used Koretz in three different training sessions this year. Daisy Christodoulou has published an excellent three-part review that covers much of the book – and I often return to Daisy’s blog to revisit points, meaning I don’t have to search through the book for them, as is the case below. I spoke about Koretz and the principles of assessment to colleagues prior to talking about our expectations for assessment from September 2016.
I said, using Christodoulou and Koretz liberally:
- assessments can be and are used badly or misused
- assessments that are high stakes create perverse incentives – the more high stakes the less reliable
- some assessments are not helpful to our aims and values, in fact they can work against them
- the perfect assessment with perfect information doesn’t exist – and certainly nothing like it in a school setting alone
However, assessment gives us valuable information, revealing more clearly who is succeeding and who is falling behind. Our assessments should therefore:
- focus on what they can tell us about what is learnt from the curriculum
- have clearly defined purposes, including ensuring they are fit for purpose (can measure what we want them to)
- be standardised
- usually and as far as possible isolate specific knowledge and skills we want to measure
- not be the sole measurement we use
In summary, they need to be reliable and valid.
Reliability means consistency, and we should recognise assessments can be reliable but not accurate.
Validity means that an assessment tells us useful information about what has been learnt. Validity requires that we know our curriculum and our subject, that we accurately sample from the curriculum, and that we don’t test from outside it. We also need to ensure we don’t teach to our assessments.
Assessment next year (proposal):
- In each subject we will assess what pupils know and what they don’t know. Subject areas decide how often, and how, they assess pupils in their subject. They are not expected to assess simply because data is due to be put into a spreadsheet. The purpose of this assessment is to add to our knowledge of what pupils know and what they don’t know, so that we can teach them what they don’t. Teachers and departments will decide what their mark books should look like. SLT will not lay this down.
- We will report, three times a year, how well the pupils have mastered the curriculum (i.e. the knowledge we are expecting pupils to gain each year) and their attitude to learning. This will not be automatically generated from assessments. Teachers will use their professional judgement. We will award grades (from A through to E). The purpose of this is so that parents, other teachers and leaders, and governors can see how well pupils are doing, and to ‘raise a red flag’ if pupils are falling behind so that the school can support teachers and pupils. This information will never be used for performance management or capability.
- We will have an annual standardised examination in each subject (we are open to not having one in subjects where it might be claimed this won’t add to our information). This will be moderated across the department, and perhaps beyond the school if we can organise it. We will rank the pupils (though I’m not sure we will make these rankings public – I’m open to discussion on this). The purposes of this are to cross-check the professional judgements above and to give pupils experience of terminal examinations.
- I’ve edited this to add that we will still gather predicted grades three times a year in Key Stage 4. The purpose of this is to get some insight into pupils who may need support, and because local sixth form colleges require these as standard.
In part two, I will go into more detail about how the second point above (termly reporting of mastery and attitude to learning) might work. This is available here.