Defrauding the phonics check?

Next week, children in year 1 across England will undergo the phonics screening check, a test portrayed by the government as a way of identifying those at risk of falling behind in their early reading skills. Today my colleague Pete Yeomans and contact Stephen Lockyer shared on Twitter a 2012 post by Deevy Bishop which claims that data from last year’s check “provides clear-cut evidence that the data have been manipulated”.

Deevy took the data released by the Department for Education, plotted it on a graph and revealed the following distribution:

Describing this as ‘a worryingly abnormal distribution’, Deevy analyses the results and claims that the unusual-looking dip, followed by a huge cliff just after the ‘pass’ mark of 32, is the result of teachers and schools deliberately manipulating the scores of the children taking part.
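Bishop’s full analysis is in the original post, but the shape of the alleged effect is easy to sketch with a toy simulation. The numbers below are made up purely for illustration (they are not the DfE data, and the nudge rule is a hypothetical): if markers give borderline children the benefit of the doubt, scores just below the threshold vanish and pile up at the pass mark, producing exactly a dip followed by a cliff.

```python
import random
from collections import Counter

random.seed(0)
PASS_MARK = 32  # pass threshold used in the 2012 check

# Simulate "true" scores out of 40 from a smooth distribution
# (illustrative only -- not the real DfE data).
true_scores = [min(40, max(0, round(random.gauss(30, 6))))
               for _ in range(100_000)]

def reported(score, nudge=3):
    # Hypothetical generous marking: a child scoring just below
    # the pass mark is bumped up to exactly the pass mark.
    if PASS_MARK - nudge <= score < PASS_MARK:
        return PASS_MARK
    return score

reported_scores = [reported(s) for s in true_scores]

# Counts around the threshold: a dip appears just below 32 and a
# spike at 32, echoing the pattern Bishop describes in real data.
counts = Counter(reported_scores)
for s in range(28, 36):
    print(s, counts[s])
```

The point of the sketch is that no single marker needs to cheat dramatically: a small, individually defensible nudge applied across many borderline candidates is enough to carve a visible notch into an otherwise smooth distribution.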

My last foray into proper statistics was some years back, in my A-level Maths studies, so head over to the original post for a much clearer explanation than I could provide. I of course welcome those better informed in this field to challenge the analysis if it needs challenging, as some commenters on the original post have done.

So, if this is actually the case, why has it happened? My understanding from the media and government statements was that the phonics check was meant to identify those in danger of falling behind so that extra support could be provided. Of course, nothing in UK education is ever as simple as that…

My initial research and discussions with a couple of head teachers have left me unable to pin down what the stakes actually are for schools. The government says the purpose of the test is to identify children at risk of falling behind so that additional support can be put in place. Even the NUT, who oppose the check, concede that it is not currently going to affect league tables (although that has not been ruled out for the future). The results do feed into the RAISEOnline tool that schools use for self-evaluation, which in turn influences the areas Ofsted inspectors explore when they visit. However, this is a very early test; as long as a school has a solid, context-appropriate strategy in place to secure good future progress and later outcomes, it hardly looks like a stick schools will be beaten with.

I am trying to understand what could drive the alleged widespread manipulation of this data. Is it simply a fear that ‘whatever schools report may later be used against them’? Has education in England really reached the point where some schools would feel the need to defraud a statutory assessment out of general fear and uncertainty, or is testing all a game now anyway? Who knows; maybe (as the original post suggests, then dismisses) the test was designed that way. There’s one for the education conspiracy theorists out there…

 

Please do read the original post on this from Deevy Bishop.

 

References:

Deevy Bishop, Data from the phonics screen: a worryingly abnormal distribution.

Department for Education, Phonics screening check and national curriculum assessments at key stage 1 in England: 2012.

Department for Education, Phonics screening check FAQ; What is the phonics screening check?.

Department for Education & Ofsted, RAISEOnline.

National Union of Teachers, Phonics Screening Check.

 

Image: Deevy Bishop, produced with data published by the Department for Education.


6 thoughts on “Defrauding the phonics check?”

  1. The stakes are not as high as for SATs, but the individual teacher can be judged by their head on these stats, and it is clear from forums that there was concern about results. Plus, it is very simple to interpret the responses of borderline candidates more generously. Each individual teacher won’t think they are really cheating, and will have no comprehension of what some generous decisions at the borderline will do to the overall stats.

  2. It’s a very interesting question, and also interesting to note that accusations of ‘cheating’ are coming thick and fast at the moment, which says something very stark about the attitudes to teachers and schools from on high.

    My take on this is explained here: http://suecowley.wordpress.com/2013/10/04/values/

    Enforced compliance with something where people struggle to see the point can make their values go all slippery. Human nature, basically.

  3. Without seeing the actual words, you can’t make assumptions about these scores. For example, it may be that 32 words were straightforward CVC combinations but the last eight were CVVC or CVCV combinations which threw some children. Or that split digraphs were a specific problem. Or that eight words were bi-syllabic. It’s hugely inflammatory, and telling, that the interpretation automatically offered is one of cheating.
