Sunday 9 July 2017

Changing The Teaching of Reading: Did It Work?


Disclaimer: Although this blog post is all about SATs data, it's not what I'm all about, nor has it been our singular focus this year. This blog post merely serves to analyse test data, however flawed it is turning out to be, in the spirit of transparency.

Regular readers will know I've blogged quite a lot about reading over the last year (slight understatement). I've developed ways of teaching reading (based on research, not just whim) which I've shared with others, and which others have used in their own classrooms. I've been eager to share my approaches because I've been excited about them, but there was always a question in the back of my mind: "What if they don't work?"

So, as results day approached, I was worried for more than one reason: I hoped I hadn't let my own school down, and I hoped I hadn't sent those who'd tried some of my ideas down the wrong path.

And because I've shouted loudly about how we've tackled our approach to teaching reading, particularly in year 6, it's only really right that I share something of our results with my readers.

First off, I must signpost an excellent blog post by Mr. Jennings: 29% to 92%, a reading journey! The post is an in-depth exploration of everything his school did to make a supremely impressive jump in the percentage of children reaching the 100 scaled score in the 2017 key stage 2 reading test. I feel honoured to have been mentioned as part of Mr. Jennings' journey, but I now hail him as my new hero! What an achievement! I shall be learning from all the amazing work he has done this year.

Attainment

Well, we didn't quite get a 63 percentage point increase in our reading results - ours was a much more modest 21 percentage point increase, meaning that roughly half of our 60-strong cohort reached the magic 100 mark.

I was pleased with the increase - I've been told that a school working very hard to improve something can expect a 10 percentage point increase in results on average.

However, I had been hoping that 60% of children would reach the pass mark. Looking into the data, I found a number of children who were between 1 and 4 marks short of the 26-mark threshold - had they passed, these children would have brought the percentage reaching the 100 scaled score up to my desired 60%. (I am hoping to have 3 of these children's scripts remarked.)
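
As a quick sanity check on that arithmetic, here is a back-of-the-envelope sketch in Python. The exact counts are assumptions (above I only say that roughly half of the 60-strong cohort passed and that a handful of children were within a few marks of the 26-mark threshold); the point is simply to show how close those near-misses take the cohort to 60%.

```python
# Illustrative only: the counts below are assumptions, not published figures.
cohort_size = 60
passed = 30        # "roughly half" of the cohort reached the 100 scaled score
near_misses = 6    # assumed number of children 1-4 marks short of the 26-mark threshold

current_attainment = passed / cohort_size
potential_attainment = (passed + near_misses) / cohort_size

print(f"Current attainment:            {current_attainment:.0%}")   # 50%
print(f"If the near-misses had passed: {potential_attainment:.0%}")  # 60%
```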

Progress

So, whilst our attainment was low, it wasn't a surprise. Our school's history and context (in short: an Inadequate Ofsted judgement in December 2013 leading to academy conversion in January 2015, 94% EAL, 37% disadvantage) mean that our children haven't consistently had good teaching.

A review of historical data shows that only 22.2% of this cohort made average or above-average progress during their time in lower key stage 2 and, as a result, only 5.6% of them were working at age-related expectations in reading at the end of year 4. By the end of year 5, this figure (at ARE) had risen to 26%.

Our online tracking system (SPTO) takes a CTF file from the NCA Tools website and assigns points to the different scaled scores. Using this data, 94.8% of the cohort are shown to have made average or above-average progress; in fact, 91.4% made accelerated progress. Looking at progress from official key stage 1 data shows that only 10% of children didn't make expected progress - at the beginning of the year, 43% were not making expected progress across the key stage.
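
For transparency about what that calculation involves, here is a rough sketch of the kind of comparison the tracking system makes. I'm not reproducing SPTO's actual model - the names and point values below are made up for illustration - but the principle is that each child's end-of-key-stage points are compared against an expected figure derived from their starting point.

```python
# Hypothetical sketch of a progress check: 'expected' and 'actual' points per child would
# come from the tracking system's conversion of prior data and KS2 scaled scores.
# All values below are made up for illustration.
children = [
    {"name": "Child A", "expected_points": 24, "actual_points": 27},
    {"name": "Child B", "expected_points": 24, "actual_points": 24},
    {"name": "Child C", "expected_points": 24, "actual_points": 22},
]

average_or_above = [c for c in children if c["actual_points"] >= c["expected_points"]]
accelerated = [c for c in children if c["actual_points"] > c["expected_points"]]

print(f"Average or above progress: {len(average_or_above) / len(children):.1%}")
print(f"Accelerated progress:      {len(accelerated) / len(children):.1%}")
```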

So, even though our attainment results don't yet reflect the progress being made due to low starting points, there is considerable reason to believe that the approaches we took in the teaching of reading have had a positive impact on the children.

Test to Test

One or two of you may remember that I reported that, before Christmas, almost 50% of the cohort achieved the pass mark on the 2016 reading test. This has caused me quite a bit of consternation: did no progress occur between December and May, given that the same percentage of children passed in May as in December?

So, I spent some time looking into the data. Thankfully, I'd kept the scores from when the children sat the 2016 test, as we had wanted to see whether it was a good indicator of how well children would do on the actual test, so that we could rely on it as a practice paper in future.

Positively, I discovered that:

  • two extra children 'passed' in May who hadn't passed in December (one child who had 'passed' in December didn't 'pass' in May).
  • the near-miss children mentioned above (those who ended up with scaled scores of 97, 98 or 99) all gained significantly more marks on the 2017 test (between 6 and 12 more) than on the 2016 test, and all but one of them (the one who 'passed' in December but not in May) got a higher scaled score - one child, for example, moved from a scaled score of 90 to 97.
  • most of our lowest-attaining children, and our SEND children, made the most progress between the two tests, with some of the most vulnerable children gaining a double-digit increase in both their scaled score and the number of marks scored.
  • overall, children had made progress, some very impressive, from one test to the next, even if this did not mean that they achieved the 100 scaled score.

Interestingly, I also found that a number of children who 'passed' both tests achieved lower scaled scores on the 2017 test than on the 2016 test, with an average difference of -2 scaled-score points. For some children, it appeared that the 2017 test, although easier, was actually harder to pass because of its raised pass mark than the more difficult 2016 test with its lower pass mark.
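
For anyone wanting to run the same kind of test-to-test comparison, here is a rough sketch of the sort of script or spreadsheet logic involved; the children and scores below are invented, and the 100 'pass' threshold is the only figure taken from the tests themselves.

```python
# Hypothetical data: December (2016 test) and May (2017 test) results per child.
# Each result is (raw marks, scaled score); all values are made up for illustration.
results = {
    "Child A": ((20, 90), (28, 97)),
    "Child B": ((27, 101), (27, 100)),
    "Child C": ((30, 104), (29, 102)),
}

PASS = 100  # scaled score needed to 'pass'

for child, ((dec_marks, dec_ss), (may_marks, may_ss)) in results.items():
    print(f"{child}: {may_marks - dec_marks:+d} marks, scaled score change {may_ss - dec_ss:+d}")

# Average scaled-score change for children who 'passed' both tests
both_passed = [(d[1], m[1]) for d, m in results.values() if d[1] >= PASS and m[1] >= PASS]
avg_change = sum(m - d for d, m in both_passed) / len(both_passed)
print(f"Average scaled-score change for double passers: {avg_change:+.1f}")
```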

So, is the 2016 test a good indicator of how well a child might do in the 2017 test? Yes, although some children may get an equal, or lower, scaled score on the more recent test as it could be considered harder to pass.

And does the fact that the same percentage of children passed both tests, despite the extra teaching time in between, mean that our approach to teaching reading didn't work? I believe not, as most children made progress between the two tests, gaining both extra marks and higher scaled scores - this was particularly evident for lower attainers.

However, it does draw attention to a group of children who were scoring in the 90s on the 2016 test in December and who would have benefited from some additional support - this is the challenge for next year: what does that intervention look like? What do those children need in order to be more consistent when answering comprehension questions? Are there other factors that meant these particular children struggled to reach the pass mark, despite showing progress?

I hope that this blog post has been read with interest and without judgement - I only seek to be transparent. I am fairly certain I can conclude that what we have done in reading has, on the whole, been successful and that, like most new approaches, it now just needs some adjustments and additions. However, I share this hoping that I might gain insight from others on how else I might interpret the data and draw conclusions - for example, if it seems to you that my approach hasn't worked at all, I'd rather know that and not waste any more time on it!

A small request: I'd be interested if you would share with me, publicly or privately, the increase in percentage you have experienced between the 2016 and 2017 reading results (between last year's cohort and this year's).
