Healthcare Triage
Replication, Re-Analysis, and Worm Wars
YouTube: | https://youtube.com/watch?v=9SCFlYlNlLQ |
Categories
Statistics
View count: | 27,649 |
Likes: | 877 |
Comments: | 37 |
Duration: | 08:20 |
Uploaded: | 2015-09-29 |
Those of you who want to read more can go here: http://theincidentaleconomist.com/wordpress/?p=67113
John Green -- Executive Producer
Stan Muller -- Director, Producer
Aaron Carroll -- Writer
Mark Olsen -- Graphics
http://www.twitter.com/aaronecarroll
http://www.twitter.com/crashcoursestan
http://www.twitter.com/johngreen
http://www.twitter.com/olsenvideo
And the housekeeping:
1) You can support Healthcare Triage on Patreon: http://vid.io/xqXr Every little bit helps make the show better!
2) Check out our Facebook page: http://goo.gl/LnOq5z
3) We still have merchandise available at http://www.hctmerch.com
A number of you have asked for an episode on worm wars. Others of you will have literally no idea what I mean by 'worm wars'. This episode's for both of you. Worm wars are the topic of this week's Healthcare Triage.
(Intro)
It's thought that about 25% of all people on Earth, more than one and a half billion people, have worms infecting them right now. That includes 270 million preschool-age and 600 million school-age children. Many of them get infected by coming into contact with other infected children.
Treating lots of people seems like a good idea. In the late 1990s a huge study was run in Kenya to look at the benefits of a deworming program there. It was a randomised controlled trial, by school, to see whether mass deworming treatment improves the lives of children.
Please understand we're not talking about whether or not we should treat infected children. We're talking about whether we should treat everyone, regardless of whether they're infected, to see if outcomes would change. After all, if you treat infected kids, not only do they benefit, but others might too, by not becoming infected in the first place.
And the original paper found that they did. Not only did other kids get less infected, but school performance improved for all, as well as overall health. Moreover, effects weren't just seen in the schools where treatment occurred. They were also seen in kids in nearby schools who weren't treated. That means that treating kids, which is already pretty cheap, would be massively cost effective.
This was one of the most influential papers published with respect to global public health. People began campaigns to deworm the world. It was touted as a silver bullet; a cheap intervention that could have massive impact on gazillions of children. People acted.
But in recent years, some have begun to question these results. They think the media might have oversold the findings, which the media sorta does all the time, and that the impact of the program wasn't as big as many thought. A recent Cochrane systematic review found that widespread deworming treatment did not have these widespread beneficial effects. This review wasn't the big problem though. The shots that pretty much started worm wars were fired a few months earlier, with the publication of two papers, a replication and a reanalysis of the original study.
This is a good time to pause and talk about replication and reanalysis in general. There's been a huge push in recent years for us to have more transparency in science. Far, far too many results published in papers turn out not to hold up in further work. There has therefore been a push for scientists to allow others access to their data, so that they can make sure that no mistakes have been made. Replication is basically following the same exact steps that the researchers took originally to see if they get the same answer. Reanalysis is taking the original data and seeing if rethought and perhaps better statistical methods would achieve the same results. One paper was a replication and one was a reanalysis. Both papers claimed not to get the same findings, and so began worm wars.
As I said, the first new paper was a pure replication. They took the original data and statistical programming code and checked whether running them reproduced the published findings. And it did find some issues. The biggest one was in the way the original paper measured the long-range effect of deworming a school, out to six kilometers. The authors claimed they had measured all schools out to six kilometers, but the code had really only captured a subset, the schools closest to the three kilometer mark. So when the replication scientists fixed the code, they no longer found the same robust effects all the way out to six kilometers. This is even though they did find similar reductions in worm infections, small improvements in nutritional status, and improved school attendance for those in intervention schools.
Regardless, the big news was the failure to find the so-called spillover effect in neighboring schools. Let me quote from the conclusion: "Re-applying analytical approaches originally used, but correcting various errors, we found little evidence for some previously-reported indirect effects of a deworming intervention". And this caused a firestorm. The media reported that the results were 'debunked' or 'overturned', but not so much. The problem here is that, using statistical language and a strict criterion for distance, they showed that the effects were no longer statistically significant at six kilometers. But that doesn't mean there's no effect anywhere.
There was an effect. It was seen close to the treated school, and it was still robust for schools located within three kilometers. It was even robust to four kilometers. Then it started to drop off. The authors of the original paper made a nice figure about this in a response they released to the replication paper. In what may be the single best piece about worm wars you could read, Michael Clemens and Justin Sandefur proposed a better conclusion for the replication paper. They said, and I'm quoting: "The statistical analysis underlying the original study found there to be a positive spillover effect extending four kilometers away from treated schools. The write-up of the original study incorrectly reported that those spillover effects extended to six kilometers. Aiken et al. correct errors in the original study's computer code. They do successfully replicate the spillover effects at four kilometers, but fail to replicate the spillover effects at six kilometers claimed by the original study." That's way different than saying that the original paper is 'debunked'.
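A toy sketch can show why "not statistically significant at six kilometers" isn't the same as "no effect". These numbers are entirely made up, not from the Kenya study: I'm just assuming a spillover effect that shrinks with distance, with fewer comparison schools in the farther bands, and checking where a z-test crosses the usual 1.96 cutoff.

```python
import math

# Made-up numbers (NOT from the Kenya study): an assumed spillover effect
# that shrinks with distance, and fewer schools per arm at longer distances.
bands = {
    "3 km": {"effect": 0.30, "n": 50},
    "4 km": {"effect": 0.25, "n": 40},
    "6 km": {"effect": 0.08, "n": 20},
}
sigma = 0.5  # assumed common standard deviation of the outcome

results = {}
for dist, b in bands.items():
    se = sigma * math.sqrt(2 / b["n"])  # standard error of a mean difference
    z = b["effect"] / se                # test statistic
    results[dist] = z
    verdict = "significant" if z > 1.96 else "not significant"
    print(f"{dist}: estimate={b['effect']:+.2f}, z={z:.2f} -> {verdict}")
```

The six-kilometer estimate is still positive; it just comes with a wide enough standard error (fewer schools, smaller effect) that it no longer clears the significance bar. Absence of significance is not evidence of absence.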
The second paper was a reanalysis. They basically sat down and designed a new way to look at the data. They argued that the analysis that the original paper did was biased in some ways. Also, there were some missing data, which is a real issue. But the missing data issues didn't cause much in the way of changes, so it's a relatively minor problem. And the arguments here are complicated, really, so I'm giving you my best read of the analysis. If someone tells me I'm wrong, I'll add an annotation later. Don't scream at me! I'm gonna do my best. It's also possible for people to reasonably disagree here.
One way they changed the analysis was to make it an 'intention to treat' one. This basically means that you treat people randomised to the treatment group as if they got dewormed, even if they never did. Most times I'd agree with this change. 'Intention to treat' analyses are almost always more robust when you care about effectiveness. But the authors of the original piece argue that it isn't appropriate here, because we're not talking about intent. Some schools didn't start getting dewormed until a later date. They therefore shouldn't count as 'intervention' schools until that later date, and that makes sense to me. Treating them as treated before treatment began seems odd.
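Here's a minimal sketch of the intention-to-treat idea, with hypothetical schools and made-up attendance numbers (none of this is from the actual study). One school assigned to deworming hadn't actually started treatment, and the two analyses handle it differently:

```python
from statistics import mean

# Hypothetical schools (made-up numbers):
# (assigned to deworming?, actually treated?, attendance score)
schools = [
    (True,  True,  80), (True,  True,  82), (True,  True,  84),
    (True,  False, 70),                      # assigned, but not yet treated
    (False, False, 70), (False, False, 71),
    (False, False, 72), (False, False, 71),
]

# Intention-to-treat: compare by random assignment, ignoring actual treatment.
itt = (mean(s[2] for s in schools if s[0])
       - mean(s[2] for s in schools if not s[0]))

# As-treated: compare by whether treatment actually happened.
as_treated = (mean(s[2] for s in schools if s[1])
              - mean(s[2] for s in schools if not s[1]))

print(f"ITT estimate:        {itt:.1f}")  # diluted by the untreated school
print(f"As-treated estimate: {as_treated:.1f}")
```

The intention-to-treat estimate is smaller because the untreated-but-assigned school drags the treatment arm down. That dilution is usually the point, it mimics real-world effectiveness, but it's why the original authors argue it's the wrong lens when some schools simply hadn't started yet.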
Another change they made was to split the years of the study into separate groups, and that also seems odd to me. If I run a drug trial over three years, I usually don't consider that three different trials. I mean, why not 36 different month-long trials? Splitting things like that significantly reduces your statistical power. I don't see a compelling reason to think that has to happen here, but again that's me. There were some other changes that they made, but here's the thing: without splitting this study into two, as they did by year, none of them would have made a significant difference. You have to do all of those things to find that the original study reached the wrong conclusion.
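The power argument can be sketched with made-up numbers too. Assume some fixed true effect and a fixed number of schools over the whole study; analysing all the data together can clear the significance threshold while each half-sized per-year slice, with the same underlying effect, does not:

```python
import math

# Made-up numbers: an assumed true effect of 0.2 (sigma = 1.0) and
# 200 schools per arm across the whole study period.
effect, sigma, n_total = 0.2, 1.0, 200

def z_stat(n):
    """z statistic for a mean difference with n observations per arm."""
    return effect / (sigma * math.sqrt(2 / n))

z_pooled = z_stat(n_total)       # all years analysed together
z_split = z_stat(n_total // 2)   # each year analysed on its own

for label, z in [("pooled", z_pooled), ("per year", z_split)]:
    verdict = "significant" if z > 1.96 else "not significant"
    print(f"{label}: z={z:.2f} -> {verdict}")
```

Nothing about the effect changed between the two rows; only the sample size per analysis did. That's the sense in which splitting a study by year can make a real effect "disappear".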
Replication is great, and I support it enthusiastically, but there are nuances, as I've described with the first study. It wasn't debunked as many people thought. Debating analyses is great too, and I also support that, but no one has a lock on what's correct, and I don't see any reason to think that the new analysis is awesome and the old one is terrible. If you ask me, Aaron Carroll, I think that the original paper holds up pretty well. I believe that its findings are still pretty compelling. But that's not the same thing as saying 'deworming the world is worth doing'. That opens up a whole other proverbial can of worms, and would require additional experiments to replicate the findings in other settings, along with many other studies. And, as the Cochrane review showed, those data aren't always obvious.
The bottom line is that 'worm wars', the fight over the correctness of the original Kenya study, may have been overblown by the media. The stories being published aren't helping, but whether massive widespread deworming is where we should sink limited resources? That's still up for debate. I wish science and policy were easy and clear-cut, but they're often not.
Healthcare Triage is supported in part by viewers like you through patreon.com, a service that allows you to support the show through a monthly donation. We'd especially like to thank our honorary research associates: Cameron Alexander and Qadeem Salehmohamed. Thanks Cameron and Qadeem! If you'd like to support the show, more information can be found at patreon.com/healthcaretriage.