Posted by: mrlock | June 4, 2016

Assessment – simple but not simplistic – part two

In Part one I gave a broad outline of our proposals on assessment. I hope that they will put assessment in the hands of the subject professionals in our school, enabling them to truly assess pupils’ knowledge so that they can teach as effectively as possible, and ensuring that development of the curriculum and assessment are intertwined.

This is tempered somewhat by the competing pressures of ensuring that we report regularly to parents, and that we as leaders know whether pupils in the school are making progress in each subject.

The most important developmental work we are doing in our school this year is specifying the knowledge that pupils should gain in each subject, and establishing the best sequence in which they might do so. Assessing that knowledge, in order to ensure pupils are learning it, is very important, and ensuring parents are aware of how their children are doing is equally important.

This blog is an attempt to represent our Head of History Matt Stanford’s presentation to our staff, which I’ve since repeated to governors, on how we might report to parents and use our reporting system to support our development of world-class provision.

A question for you: is this pupil doing OK?

[Slide: an example pupil report]

Most professionals will correctly say “we don’t have enough information”.

What lies behind the orange C that Miss Understood has awarded for History? Well, there is a simple grade descriptor that relates to the curriculum:

[Slide: the simple grade descriptors]

That simple grade descriptor is not simplistic though – so here is Matt's first effort (which I endorse – I'm not passing on responsibility) at explaining some of the terms used:

[Slide: Matt's explanation of the terms used in the descriptors]

To use these descriptors, teachers are asked to use the mark book (designed with their subject and curriculum in mind), but also, crucially, their professional judgement. We can’t emphasise this last point enough:

[Slide: the mark book and the role of professional judgement]

Behind the mark book in each colleague's subject will lie task-specific mark schemes like the one below and, crucially, their professional judgement:

[Slide: an example task-specific mark scheme]

Behind those task-specific mark schemes will lie the nature of the subject, the professional literature in the subject, the teacher's expertise in the subject, probably the national curriculum and probably the requirements of Key Stage 4 courses and beyond, and crucially, the teacher's professional judgement.

We will expect teachers to be able to justify the grades awarded to each pupil. The evidence that teachers use to make these justifications depends on what enables them to do so best, but will largely be drawn from what is written above.

So is the pupil in the made-up example above doing OK? Well, in History, maybe. But what about the orange?

Well, the colour code represents the pupil's attitude to learning (we might just make it a separate word or number, but at the moment it's a colour code). Behind the colour code is the data the school has – for example, were they a C last time, and the time before? Is their attendance in History any good? Do they hand in homework? How are they doing compared to their reading age or Key Stage 2 SATs fine score? And, crucially, the teacher's professional judgement. It might look like this:

[Slide: the data behind the attitude-to-learning colour code]

And the colours represent these criteria:

[Slide: the attitude-to-learning criteria]

The bottom two of these should act as a flag or a warning signal.

The flag will signal the start of a professional conversation. It will never be used to grade the teacher. As soon as these become high-stakes, they lack any semblance of reliability and validity.
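
To make the mechanics concrete, here is a minimal sketch in Python – purely illustrative, with field names and thresholds that are my own assumptions rather than anything we have specified – of how the data behind a colour code and the resulting flag might sit alongside the teacher's professional judgement:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SubjectReport:
        grade: str                     # A-E, awarded against the curriculum descriptors
        previous_grades: List[str]     # e.g. ["C", "C"] from earlier reporting points
        attendance_pct: float          # attendance in this subject's lessons
        homework_submitted_pct: float  # proportion of homework handed in
        teacher_judgement: str         # "above", "in line", "slightly below", "well below"

    def attitude_flag(report: SubjectReport) -> bool:
        """True if the report should prompt a professional conversation
        (never a judgement of the teacher)."""
        data_concern = (report.attendance_pct < 90
                        or report.homework_submitted_pct < 80)
        judgement_concern = report.teacher_judgement in ("slightly below", "well below")
        return data_concern or judgement_concern

    # Example: a C in History with good attendance but patchy homework
    example = SubjectReport("C", ["C", "C"], 96.0, 70.0, "slightly below")
    print(attitude_flag(example))  # True -> a conversation, not a sanction

The thresholds here are invented; the point is only that the data prompts a conversation, and the professional judgement carries the decision.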

So is the pupil doing OK?

Yes. But (for example) his HoY had a useful and friendly conversation with his History, English and PE teachers.

Miss Understood is going to move him to the front and Miss Pelt is going to think about how she can plan the next unit in a way that provides more access for that class.

Mr Ball is going to send him to county rugby trials:

[Slides: the follow-up actions arising from the report]

This is our proposal for reporting on our curriculum to our parents. We have been careful to attempt to separate assessment of work from assessment of students, and not to confuse assessment with reporting.

It does depend on very robust quality assurance processes throughout the school, but I do not want to get those mixed up with assessment itself and thereby undermine its validity or reliability.

There's plenty more to do to make this better and make it work – please do give feedback.

Full presentation here if there’s anything you couldn’t read:

Full Grades Proposal Presentation

Also: if this kind of development – of a great curriculum, and of valid and reliable assessment for our school rather than for inspection or for SLT – is for you, please have a look at the vacancies on our website.


Responses

  1. What are the expectations measured on? End of year for all or progress for individual or?

    • The expectations slide refers to pupils' attitudes to learning. It's one measure of whether we think, in our professional judgement, that an A, or a B, or whatever, is a 'good' grade for that pupil.

  2. I like how you outline the actions that will be taken – is this included in your reporting to parents or is it recorded internally? Does this link to monitoring?

    • Monitoring – of course, in terms of knowing how well pupils are doing – but for teachers only insofar as teachers should be able to justify the grades awarded. Poor (or good) awarded grades will not be used to judge a teacher.

  3. This is interesting and really promising I think Stuart. I particularly like the slide breaking down what the report means clearly – very helpful. What I’m less clear about is the robustness behind the process of defining what students should/should not know at each stage. Apologies if I’ve missed this, but I imagine that making robust curricular choices in every year group (and ensuring content is revisited) is essential to making these reports valid.

    • Thanks for the comment Harry.

      Yes, this grew out of our curricular conversations. We found we could go no further while we were constrained by the legacy of our old assessment system and its perverse demands. In one feedback session a head of faculty asked me directly, "are we going to continue with the assessment requirements we have this year?"

      My reply was obviously no, but I didn’t know what we were going to do.

      We're intending to publish, to parents, students and staff, what we expect pupils should know in each year (KS3) in each subject in September – it's been the focus of much of our work since September.

      One of our requirements will be that assessing and reporting on the curriculum in (say) year 8 should be based on what pupils should know from years 7 AND 8, rather than restricted to the current year in what becomes an extended modular fashion.

  4. Sounds sensible to move ahead with both aspects simultaneously. I look forward to seeing this come together and I think it’s really helpful that you’re sharing this freely in advance!

  5. Reblogged this on The Echo Chamber.

  6. Hi Stuart, your plans do sound very similar to the way we assess and track at my current school so I’ve had a bit of a think about what is similar and different and weaknesses and strengths. We use A/B/C etc backed up by very similar broad descriptors to the ones you have produced. This works very well and I think its biggest strength is that it does not claim to have robust validity – just enough to be useful. I think we can either hide behind ostensibly accurate progress stats that do not have the validity they claim or we can use a system which openly acknowledges the inevitable weaknesses with ongoing teacher assessment. The latter option creates fewer distortions and we can still strive to ensure we do what we can through regular (termly?) cross year assessments that can be used year on year, use of comparative assessment for these tasks etc etc. Personally I’m unconvinced that moderation by someone uninvolved in the setting of the original tasks would prove very useful beyond a most basic quality assurance.
    You are right that it is absolutely crucial that these stats are not used in any way for performance management and I wonder if it will take a while for teachers to change mentality, even when assured this is the case. At my school there is no link and teachers do feel very free to give poorer grades with no concern they will be judged for it. They do know they will be judged by final results though. We do find that there is still creeping ‘grade inflation’. I think this is for a number of reasons. Teachers of option subjects see that the kid is getting higher grades in another subject and don’t want the child to think they are better at that subject and opt for it instead. There is also a desire among many to encourage the child and we do find children come back to teachers complaining about grades they think are unfair which I think can encourage gradual inflation.
    Similarly, for what we call 'effort grades' (1-6 for us), the 4 is for your orange 'slightly below our expectations' and teachers are a bit unwilling to use it. It always ends up really meaning 'well below our expectations', with 5 being reserved for 'I have been tearing my hair out over this child'. This used to mean that we lost any way of discriminating between satisfactory effort and good effort. It helped when we added an extra number to the scale. Now 3 means 'effort is fine' and there are two numbers above. So from our experience I'd probably add another category at the top for very special and unusual levels of effort.
    The biggest difference between our system and your proposals is the way we use ‘effort grades’. They do provoke teacher action but we use them to directly hold the student to account for poor effort. If a kid gets a string of 4s or 5s for effort they get a serious bollocking from the pastoral staff and some further monitoring and possibly loss of leisure time etc etc. This partly explains why teachers are reluctant to ‘upset’ children by giving 4s because the child knows they will be answerable and so they don’t like getting them! Of course the pastoral staff do check the background to the 4s/5s before taking action and getting one or two would probably just provoke a brief conversation between the child and the form tutor. Without these grades one subject teacher would not take action just because a child was not very focused in class but when the pastoral staff see a trend across effort grades they are able to take action quite swiftly. I think the way our school tracking so squarely puts the responsibility for poor performance at the door of the student is the key reason we get such high VA. The pastoral meeting following the production of grades always involves a discussion of the effort grades of those students and agreement over which children are a cause for concern followed by conversation etc with those children. All conversation by teachers after grades come out tends to be over effort grades and is rarely about achievement as such. Any concern about the actual quality of teaching will have been identified separately and is entirely unconnected to the tracking process.
    Anyway I hope that helps and do get in contact if you have any questions.

    • Very helpful, thank you. I am less concerned about the use of the effort grades – you are quite correct that holding the pupil to account has to be the first thing we do on seeing these grades. I skated over that, but it’s actually really important.

      I am more concerned about not claiming that they have robust validity. I need to think some more on this. I agree that we're not proposing something that is as valid as possible – just more so than levels (which isn't hard) – but I also think that there is something to be said for groups of historians, or geographers, or mathematicians looking at what a piece of work or an assessment can tell you. Isn't there? I need to think more on this.

      • I think the problem is with claiming there is an absolute standard you are judging against – it links to Daisy's points about comparative judgement. The A/B/C is a judgement that is really comparative within the cohort and thus norm-referenced (though as an experienced teacher I'll gradually build some memory of the standard over cohorts). Any external moderator will end up being comparative also, and will struggle to make meaningful comments about the absolute standard of the cohort.
        So I agree there is a huge amount cooperating subject specialists can contribute. They can work to make sure specific assessments really do judge what they claim to etc., and the final grade a teacher comes up with will be influenced by those assessments. Exactly WHAT that final grade says about a pupil's ability is necessarily fuzzy, or you leap straight back into the murky world of criterion referencing.

      • I don’t think we are. We’re saying that the end of year assessment will be moderated (and this relates to Daisy’s blog on teacher assessment under-rating disadvantaged pupils while tests do so a lot less) – if I’ve said otherwise I’m mistaken or have let my typing run away with me. We’re not intending to grade that from A-E. More likely a percentage, or something else. We haven’t actually decided.

  7. Hello Stuart

    Creating a systematic approach in a school to 'Assessment Without Levels' at Key Stage 3 is no easy job, and we can expect different solutions to it to emerge over time. We have spent over a year thinking and talking to schools about the issue. Initially, we thought that the problem was mostly about finding an alternative grading system to Levels, and I am embarrassed by how long it took us to realise that the solution to this problem really lies in ensuring that the school has developed a first-class curriculum and is teaching it well. At least, this is the conclusion that we came to, and, of course, there is nothing different here from what educationalists have always understood about effective curriculum delivery.

    Within this approach, the focus for assessment should be largely formative, i.e. gathering day-to-day evidence of how pupils are responding to what they are taught, and using a Mastery approach to ensure that pupils understand the 'key concepts and big ideas' of the subject, rather than rushing through a list of things to teach. A system is needed to help gather summary information across classes and subjects (otherwise we could end up with hundreds of spreadsheets). If that system is designed well, it will offer a simple way to record summaries of formative assessment. It should also be clever enough to do the rest, i.e. determine how the summaries of formative assessment are indicative of future attainment.

    All of the above is exactly what the report from the ‘Commission for Assessment Without Levels’ suggests. The solution to finding an alternative grading system to levels lies in the notion of ‘Mastery’ together with the idea of ‘GCSE-readiness’, but teachers needn’t worry about this because the system used should sort this out, whilst allowing them an editorial role. All teachers should need to focus on is the quality and impact of their teaching, and from time-to-time recording summaries of how pupils are responding to what they are being taught.

    The bigger picture of all this can be seen in the presentation I gave at the recent SSAT Data Leaders Conference, available at http://www.4matrix.org/ssat. At this event I asked for a show of hands from those schools that felt they had developed a good scheme for teaching the new National Curriculum effectively at Key Stage 3. The response was 2/126, and here lies the revelation. Relatively few schools have managed to navigate the confusion regarding the new National Curriculum – for example, whether a school needs to teach it, and how they can assess it if Levels have been scrapped. Yet there is no spotlight shining on Key Stage 3 which might create more urgency in solving these issues, and there are few voices speaking about the importance of developing foundation knowledge at KS3 which might better underpin high attainment at KS4. Instead, the accepted wisdom about school improvement remains geared to fixing things too late, at KS4.

    The report from the ‘Commission for Assessment Without Levels’ provides the criteria for judging the effectiveness of any school’s chosen solution to assessment at Key Stage 3. Relatively content-free approaches which recreate a similar grading system to Levels may be missing the point that it is what we are teaching and how pupils are responding to it that should form the basis for any assessment system.

  8. Hi Stuart, just catching up on this and the comments.

    – I really like that you will not automatically generate a grade from the teacher mark book. There have been a number of attempts over the years to create a kind of algorithm which does this, but they are all hugely problematic, because the minute you try to do this you either a) impose a standardised straitjacket on all classroom assessment or b) start using unstandardised and inconsistent information to attempt to derive a consistent grade. Using professional judgment is definitely the way to go.
    – However, the question then is: how do you ensure those professional judgments are accurate and consistent, particularly given all we know about bias etc.? I would suggest using the information from the annual standardised assessment to cross-check / calibrate / standardise judgments, with the aim, in the long run, of ensuring that judgments across teachers and indeed subjects are consistent – so that we can move away from saying 'well, he got an E in Maths and a C in English, but the Maths paper / judgments were harder, so that's probably about the same'. So how do you get professional judgments that are informed by classroom information but judged on a consistent scale? I think you need a mix of moderation and statistical information from the annual standardised test. It's not an easy question, but it seems like you have all the elements in place to get the answers – good luck!

    • Thank you for the comment, Daisy. As you can imagine, lots of our thoughts have been influenced by you. Matt, Geraint (Assistant Head) and I met last week and we're going ahead, but we implicitly agreed that we'll need at least as much thinking and reflection over the coming year as we've had this year.
