Assessing Assessment for Digital Making

Mention assessment in schools and most people think of tests. Memories of creaky exam halls and regulation stationery may come first, but several developments in assessment can change how we think about discovering what students have learned.

Originally published in Issue 3 of Hello World: The computing and digital making magazine for educators. Available free at helloworld.cc (shared under Creative Commons CC BY-NC-SA).

Back in the late 90s, Black and Wiliam challenged educators’ views on assessment with their seminal work ‘Inside the Black Box’, which popularised the idea of ‘assessment for learning’. They suggested an approach where assessment is brought into the learning activities themselves. When this is well executed, students are given feedback as they learn and have the opportunity to act on that feedback immediately. It’s about the teacher not just delivering and then assessing later, but regularly checking for understanding and adapting their teaching. It’s also about the learners getting regular insight into how they are learning, and crucially having an opportunity to act on the feedback they get and ensure they are making progress. The assessment is designed to serve the students’ learning, not to certify that they have achieved a set standard.

This work led to many developments in schools, with the UK government taking up ‘Assessment for Learning’, teachers being trained to regularly assess students’ learning within lessons, and students being provided with feedback to act on as they learn. More recently this approach has been supported by Hattie’s meta-analysis of influences on achievement in schools, which puts feedback at the very top in terms of measured effect size.

Truly effective formative assessment is not just about finding out whether students have got it yet. It’s about understanding how they are thinking about a topic, what misconceptions or naive understandings they have, and how your teaching or their activities can be adjusted to address this. Given the abstract nature of Computing, the potential for misconceptions is very high.

Ed tech company Diagnostic Questions are seeking to address this with their online assessment platform. Diagnostic questions look familiar at first: multiple choice questions with four answers. These kinds of questions have been given a bad reputation by some educators, but their quality comes down to how the questions themselves are put together. If a question has one correct answer and three laughably implausible ones, it won’t be a useful tool. However, if all of the answers represent different levels of understanding, or common misconceptions, then the answer a student gives is useful even if it is the wrong one. Imagine being told after an assessment not just who in a class got the right answers, but why those who got them wrong did so, and potentially which misconception you need to address for each group of students.
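As a rough illustration of why well-designed distractors matter, here is a minimal Python sketch of how each wrong answer might be tagged with the misconception it suggests, so that class results can be grouped by misunderstanding rather than simply marked right or wrong. This is not Diagnostic Questions’ actual data model; every name and structure below is hypothetical.

```python
from collections import Counter

# Hypothetical question structure: each answer option carries a tag
# describing what choosing it suggests about the student's thinking.
question = {
    "prompt": "What is the value of x after: x = 5, then x = x + 1?",
    "options": {
        "A": {"text": "6", "tag": "correct"},
        "B": {"text": "5", "tag": "thinks assignment does not change x"},
        "C": {"text": "x + 1", "tag": "reads assignment as algebra, not an instruction"},
        "D": {"text": "error", "tag": "thinks x cannot appear on both sides of ="},
    },
}

# Simulated class responses: student name -> chosen option.
responses = {"Ana": "A", "Ben": "B", "Cal": "C", "Dee": "B", "Eli": "A"}

# Group the class by the thinking behind each answer, not just right/wrong.
tally = Counter(question["options"][choice]["tag"] for choice in responses.values())
for tag, count in tally.most_common():
    print(f"{count} student(s): {tag}")
```

The point is in the tags: the same wrong answer from several students becomes a teaching signal about which misconception to address next lesson.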

For the last 18 months, Diagnostic Questions have been working with Computing At School and the Durham Centre for Evaluation and Monitoring to bring their approach to Computing. Project Quantum is a two-year project exploring the potential of crowdsourcing diagnostic questions for the computing curriculum and using them for formative assessment. They have been encouraging teachers to add questions to the platform, and to use the questions already built up with their students to help them better understand their learning. The project brings a relatively new approach to assessment to Computing teachers, and a chance to better understand how students make sense of difficult topics.

Creativity, problem solving and original approaches are key to Computing, yet these things are very difficult to assess using traditional approaches. It’s very common in education to use criteria to assess how well students are performing: we might set a programming problem and then tick off whether they have used a loop or an if statement, showing that they have understood those constructs. However, real-life programming is often more about the elegance of a solution’s design. What if the most accomplished student doesn’t use the things on your checklist? This is a particular problem with broad task briefs, or with open-ended projects.

Comparative Judgement is a field relatively new to education practice that offers huge potential here. It’s based on well-established research showing that humans are relatively poor at making objective judgements about individual objects, but very good at making comparisons. Play a musical note to most people and ask them what it is, and they will struggle; play them two notes and ask which is higher, and they are likely to succeed. Repeat this several times, with a clever algorithm to keep track and present the right combinations, and you can produce a ranking. These rankings have been shown to be very reliable, even more so if several people are involved as ‘judges’.
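To make the mechanics concrete, here is a minimal sketch of that pairwise ranking in Python, using a simple Elo-style rating update. This is an illustration under stated assumptions, not how any particular comparative judgement platform works: real systems typically fit a statistical model such as Bradley–Terry to the judgements, and the simulated ‘judge’ below stands in for a human choosing the better of two pieces of work.

```python
import random

# Hypothetical pieces of student work, to be ranked by repeated comparison.
scripts = ["Ana", "Ben", "Cal", "Dee", "Eli"]
rating = {s: 0.0 for s in scripts}  # everyone starts at the same rating


def p_first_wins(a, b):
    """Expected chance that a beats b under a logistic rating model."""
    return 1 / (1 + 10 ** ((rating[b] - rating[a]) / 4))


# Simulated judging: a hidden 'true quality' decides each comparison.
# In practice, a human judge simply picks the better of the two.
true_quality = {s: i for i, s in enumerate(scripts)}
random.seed(0)

for _ in range(200):  # many quick judgements
    a, b = random.sample(scripts, 2)
    winner, loser = (a, b) if true_quality[a] > true_quality[b] else (b, a)
    # Surprising results move ratings more than expected ones.
    step = 1.0 * (1 - p_first_wins(winner, loser))
    rating[winner] += step
    rating[loser] -= step

# The ranking recovered from comparisons alone matches the hidden order.
for s in sorted(scripts, key=rating.get, reverse=True):
    print(f"{s}: {rating[s]:+.2f}")
```

With enough quick judgements the ratings separate cleanly, which is why the method scales well when several judges each make many small comparisons rather than one person agonising over absolute marks.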

This method has been shown to work well even for judging things we don’t have clearly defined criteria for, such as looking at working in maths and asking ‘who is the better mathematician?’. It can also be reliable even when the ‘judges’ are peers at a similar level of proficiency. This opens up exciting new ground for assessing skills that involve tackling problems in an open-ended way, and for assessing complex skills without resorting to pre-defining what successful students must do. Assessment organisation No More Marking are exploring this approach in English and Maths with schools.

Assessment is all about students getting better at something, but it seems there are also some promising avenues for educators to get better at assessment.

More here:

Inside the Black Box

Diagnostic Questions

Project Quantum

No More Marking
