One of the potential benefits many of us read into using evidence, evaluation and research is that it will tell us ‘what works’. It is a seductive promise: that we can know for sure what to implement and what to avoid.
Unfortunately, the reality is rather more complex. Who something works for, the context in which it works, and indeed what ‘working’ actually looks like are all hugely contested. In education, the outcomes themselves are so contested that we need to be specific about what we are trying to achieve before deciding whether an approach ‘works’ or not.
These complexities aside for now, there is another nuance to discussions of ‘what works’ that I found useful to surface when recruiting schools to Nesta’s Randomised Controlled Trial of maths tutoring.
As part of this process we brought together schools from across the country to present the technology-enabled tutoring we are testing. We also needed to explain the process of taking part in an experimental trial: they had to sign up to be randomly allocated either to try this tutoring for many months, or to continue as usual in our control group.
For many of them the choice boiled down to whether or not they wanted to implement the tutoring in their school. The question they asked to inform this decision was ‘does it work?’.
My first reaction was to say ‘we don’t know yet’; after all, that is why we are running the trial. The aim is to find out whether it has a statistically significant impact on children’s maths grades.
However, I quickly realised this was not really what many were asking. What they wanted to know was whether the technology connecting the remote tutors to their tutees was reliable. Would the teachers be given an unreasonable burden of work by the process? Would the children engage with the whole process, or disengage and misbehave?
For interventions in schools, and I suspect most other contexts, there are two kinds of ‘what works’.
1. Does it work practically?
2. Does it work in achieving its outcomes, in this case educational benefit?
Whether something ‘works’ in a school is a combination of these two. It is all very well having an intervention or technology that is highly tuned to work in the educational sense, but if it will not work practically in real schools then it can never achieve that potential.
Perhaps more dangerous is having something that works very well practically but not educationally. An ‘educationally perfect’ initiative that fails to be practical is likely to be dropped fairly quickly, but something that runs very smoothly while delivering little educational benefit can keep on going, stealing time and opportunity from something with a greater educational impact.
I think this scenario is a particular danger with technologies in schools. With developments in user interfaces making new technologies integrate ever more smoothly with the way we do things, it is increasingly easy to see them as ‘working’ in the practical sense. Such lack of friction in the day-to-day can make it hard to question whether they work in the educational sense.
This is exactly why we are running our experiments into technology in education. Some technologies, like ‘The Visible Classroom’, are based on the soundest of educational theory from experts in the field. Developing them at an early stage to work practically in the complex environment of the classroom is the work our collaborative team have been doing. Other technologies such as ‘Remote Tutoring’ are already scaling across a large number of schools. They work in practice and appear to have educational benefits. Now is the time to rigorously evaluate the effect they have on young people at scale.
When looking for ‘what works’ we need to be mindful of both sides of the question. Whilst they may develop at different rates in different cases, either one without the other gives us something that will not live up to its potential for impact.