New Assessment: Moving towards evidence-based teaching

June 23, 2012  |  Featured, technology
“Our exam system isn’t a gold standard; it’s a legacy system”
-Paul Kelley

In a seminar today at ‘The Sunday Times Education Festival’, Paul Kelley discussed the assessment system and how we might use the data and analytics so many teachers despise to actually move towards evidence-based teaching that can make a measurable difference to learning.

Our data say that we are getting better, with grades at A level increasing each year. However, PISA and OECD studies tell a different story. The external measures of the British system, Kelley claimed, show that literacy is going down.

Colombia, Georgia, Norway and New South Wales are using new technologies to drastically reduce the costs and timescales associated with exams, as Ewan McIntosh reported some time ago…

Cost is one thing, but what about speed? When exam results come immediately rather than in a brown envelope months later, could the summative assessments often accused of being of little use to learning actually become formative?

He shared John Hattie’s evaluation of the effectiveness of interventions. These results are surprising, and some teachers in the room obviously disagreed with some of them, in particular the evidence that ability grouping has no benefit to learning. Many of these findings don’t ‘feel right’ to teachers, both new and experienced. The question we need to ask is: do we go with evidence to shape our practice, or gut feeling?

[Image: slide of John Hattie’s effect sizes for educational interventions]

Kelley argued that when you have rigorous, regular and quick assessment, you begin to move to a place where educational practice can be specifically evaluated in terms of what makes a difference. At present, he characterised our educational system as where medicine was in the days of the leech: we are applying things based on a hunch that they will work.

It strikes me that the biggest problem we have here is engaging school teachers with research, so that their criticism can move from it ‘not feeling right’ to more critical questions of methodology and validity. In our current system there is neither the time nor the culture for engaging with this kind of work in a sustained, embedded way, and until there is we will continue to work on our hunches despite being told the ‘evidence’.



3 Comments


  1. Completely agree. For me there are a series of interlinked problems that need to be addressed.

    1. Most of the research I encountered on my PGCE made very little link to practical strategies (what I was really looking for at that point), and a frightening number of studies concluded with little more than ‘and this needs further research’. As a result teachers are introduced to the idea that research is both impractical and inconclusive.

    2. I’ve lost count of the number of times I’ve heard the expression ‘research shows’ bandied around on inset, with little reference to what the research was, or where I could go to find out more. Given the whole ‘Brain Gym’ fiasco of a few years ago, many teachers have simply lost confidence in the whole concept of educational research.

    3. Much of what has been shown recently (and we can take Hattie as an example) does fly against what we instinctively think. For me, it was how low the impact of teaching assistants was, as I’ve worked with many who have made a massive difference to students for whom barriers to education would otherwise have been too great to overcome.
    We need to be able to use these points to start conversations though, and this is something I see little of. Research is either accepted or dismissed out of hand. In the case of classroom assistants, for example, across a whole cohort they have little impact, but on specific, targeted students they may have much more. We need to examine the evidence (rather than simply commit to more research!) and be open about our choices and the reasons for them.

    Great post. Looking forward to reading more of your reflections from the day.

  2. As a former research scientist, I will always challenge the ‘research shows’ argument and ask to read the original paper, but I realise that I will be in the minority with this approach. I’ve enjoyed linking theory to practice during my PGCE but wonder how much time I will have to devote to keeping this up once I start teaching. Ideally, I would love to read blogs that pick out interesting research and summarise the findings, so I could filter out the key papers of interest that may impact my teaching. Alternatively, an online journal club once a term (I know scientists and medics who run a Twitter journal group on a hashtag, like #ukedchat) might help to develop an interest in the evidence, which in turn would perhaps encourage teachers to start thinking about being involved in research themselves?

    On a slightly different point, I haven’t read the Hattie work, and will go and see if I can get hold of a copy at the library before my university access runs out, but the title alone worries me – meta-analyses are notoriously difficult to conduct and very easily subject to bias. You made an analogy to medicine, and there meta-analysis is used a great deal, but it is a powerful instrument, to be deployed and interpreted with great care, e.g.: http://www.ccjm.org/content/75/6/431.full I’ve found the educational literature very difficult to get a handle on, having come from the quantitative world of science, and can see why meta-analysis is attractive for getting large effects. Surely, the only way to be sure of measurable effects is to employ a statistically rigorous approach with large original datasets?

  3. Agree with Jo and Dave. It is better to go with your hunch than to pay homage to quack research (of which there is a lot about).

    Anyone who says “research shows…” should immediately be asked “what research?” – and if they cannot provide convincing details of the research they are referencing, they should be booed out of the room as a charlatan. So often, when you bother to follow up references, you find some completely half-baked piece of work, depending on attitudinal surveys, which does nothing more than collate the hunches of various interested parties.

    That said, most of the conclusions on the slide pictured above feel about right to me – at least in the positive corner. And the conclusion that quick and constant assessment is critical strikes me as being a prime example of common sense. My hunch is that we need to move away from making the distinction between formative and summative assessment (indeed, between assessment and teaching). Everything should be assessed constantly – just as we keep our eyes open when driving a car.

    In the negative corner, I would be more sceptical, as I suspect that the picture is pretty complex, especially when some interventions have a punitive or divisive element. It would be easy to show, for example, that a poor OFSTED inspection had a demoralising effect – but the overall effect of the inspection regime is much broader than that. The only way of researching the overall effect of OFSTED inspections would be to compare inspected schools with schools which are not under any inspection regime at all – and I am not sure where you would find any of those.

Trackbacks

  1. Learning highlights for 2012 | Oliver Quinlan



Creative Commons Licence

Oliver Quinlan: Learning, teaching, technology by Oliver Quinlan is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Based on a work at www.oliverquinlan.com. Permissions beyond the scope of this license may be available at http://www.oliverquinlan.com/blog