Two types of ‘What Works’


One of the potential benefits that many of us read into the approach of using evidence, evaluation and research is that it will tell us ‘what works’. It is a seductive promise: that we can know for sure what things to implement and what to avoid.

Unfortunately the reality is rather more complex. Who something works for, the context in which it works, and indeed what ‘working’ actually looks like are all hugely contested. In education, the intended outcomes of activities are themselves so contested that we need to be specific about what we are trying to achieve before deciding whether an approach ‘works’ or not.

These complexities aside for now, there is another nuance to discussions of ‘what works’ that I found it useful to surface when recruiting schools to Nesta’s Randomised Controlled Trial on maths tutoring.

As part of this process we brought together many schools across the country to present to them the technology-enabled tutoring we are testing. We also needed to explain the process of taking part in an experimental trial. They had to sign up to be randomly assigned either to try this tutoring for several months, or to continue as usual in our control group.

For many of them the choice boiled down to whether or not they wanted to implement the tutoring in their school. The question asked to inform this decision was ‘does it work?’.

My first reaction was to say ‘we don’t know yet’; after all, that is why we are running the trial. The aim is to find out whether it has a statistically significant impact on children’s maths grades.

However, I quickly realised this was not really what many were asking. What they wanted to know was whether the technology connecting the remote tutors to their tutees was reliable. Would the teachers be given unreasonable burdens of work by the process? Would the children be able to engage with the whole process, or would they disengage and misbehave?

For interventions in schools, and I suspect most other contexts, there are two kinds of ‘what works’.

1. Does it work practically?

2. Does it work in achieving its outcomes, in this case educational benefit?

Whether something ‘works’ in a school is a combination of these two. It is all very well having an intervention or technology that is highly tuned to work in the educational sense, but if it will not work practically in real schools then it can never achieve that potential.

Perhaps more dangerous is having something that works very well practically but not educationally. An ‘educationally perfect’ initiative that fails to be practical is likely to be dropped fairly quickly. Something that works very smoothly but has little educational benefit can keep on going. It could steal time and opportunity from something with a greater educational impact.

I think this scenario is a particular danger with technologies in schools. With developments in user interfaces making new technologies seem to integrate with the way we do things ever more smoothly, it is increasingly easy to see them as ‘working’ in the practical sense. Such lack of friction in the day to day can make it hard to question whether they work in the educational sense.

This is exactly why we are running our experiments into technology in education. Some technologies, like ‘The Visible Classroom’, are based on the soundest of educational theory from experts in the field. Developing them at an early stage to work practically in the complex environment of the classroom is the work our collaborative team have been doing. Other technologies such as ‘Remote Tutoring’ are already scaling across a large number of schools. They work in practice and appear to have educational benefits. Now is the time to rigorously evaluate the effect they have on young people at scale.

When looking for ‘what works’ we need to be mindful of these two sides of the question. Whilst they may develop in different cases at different rates, either one without the other gives us something that will not live up to its potential for impact.


Photo Credit: Thomas Hawk via Compfight cc







3 responses to “Two types of ‘What Works’”

  1. Steve Philp

    For me an extension to question 2 is: “does it stop other things working?”

    As a teacher at a participant school in one of the projects you mention I was rather depressed when my headteacher asked me the question: “you already have 3 outstanding teachers here, why are you trying to develop them?” I think what he was really trying to say is that there are a lot of things going on right now – is this really the right time, the right place for this new project? Or maybe he really does think that once a teacher reaches a certain point they don’t need to get any better.

    Either way, I found this intervention had a negative impact on my motivation to see the research project through and I’m sure it had a negative impact on the teachers too.

    I am keen to find out more of ‘what works’ but am not sure if the majority of my colleagues in the teaching profession want to ask the same question. I’m also not sure of how to have the conversation that gets us all into that place…

    1. oliverquinlan

      Thanks for your comment Steve, it’s really valuable to hear the views from those in a different role in these research projects. I hesitate to say ‘on the other side’ because I think it’s important that such projects are collaborative, although obviously different collaborators have slightly different roles. So much is happening already in schools that it is almost inevitable that something new replaces something current or old. I think it’s an important point you make: we need to notice this and make sure we are replacing that which could be replaced, rather than something really beneficial. Some of the EEF projects showing no impact link to this, I think; it is as important to know what we can stop doing as it is to know what we should be starting anew.

