
October 16, 2006

More On The Lancet Study

Quick note on the Lancet study which, in blogosphere time, may already be old and musty. When Megan McArdle questioned the study's validity (and I think she was the most valid and convincing of the skeptics), she did so on the back of an editor's note admitting that sampling error may have occurred: among other things, many communities were too dangerous for return questioning, families with combatants could have hidden deaths, and many families possibly underreported infant deaths. Two provinces, due to a miscommunication, were left out of the sample, and so no excess deaths were counted there. Some families may have been totally obliterated -- a bomb landing on their house, say -- and so wouldn't have reported any deaths. And migratory patterns between the population survey and the sampling could have over-represented high-mortality areas.

Reacting to all this, Megan says "if you can't take a good sample, which these guys pretty clearly couldn't, it doesn't matter how faithfully you run the regressions on the crap you managed to collect." Fair enough. Except just about all of these biases would undercount the death rate. It's not that they have a poor sample biasing the data in unpredictable ways, but that they have bias pointing in a discernible direction. If, like Megan, you believe the estimate is too high, this doesn't much help your case.
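To see why the direction of the bias matters, here's a toy simulation. Every number in it is invented for illustration (these are not the study's figures, and the probabilities are guesses, not estimates): if households conceal some deaths and wholly destroyed households report nothing, the surveyed rate comes in below the true rate, never above it.

```python
import random

random.seed(0)

# Purely illustrative parameters -- NOT the Lancet study's data.
TRUE_DEATH_RATE = 0.012      # assumed true death rate per person
HIDE_PROB = 0.25             # assumed chance a given death is concealed
WIPEOUT_PROB = 0.001         # assumed chance a household was obliterated
HOUSEHOLDS = 50_000
PEOPLE_PER_HOUSEHOLD = 7

true_deaths = 0
reported_deaths = 0
for _ in range(HOUSEHOLDS):
    deaths = sum(random.random() < TRUE_DEATH_RATE
                 for _ in range(PEOPLE_PER_HOUSEHOLD))
    true_deaths += deaths
    if random.random() < WIPEOUT_PROB:
        continue                      # nobody left to interview
    # Each death is independently concealed with probability HIDE_PROB.
    reported_deaths += sum(random.random() >= HIDE_PROB
                           for _ in range(deaths))

population = HOUSEHOLDS * PEOPLE_PER_HOUSEHOLD
print(f"true rate:     {true_deaths / population:.4f}")
print(f"surveyed rate: {reported_deaths / population:.4f}")
```

Whatever the concealment rates actually were, mechanisms like these can only push the survey's number down, which is the point of the paragraph above.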

And if you really want to get deep in the weeds on this, The Lancet has a podcast with one of the study authors defending the methodology.

Update From Comments: On the other hand, SomeCallMeTim points to this guy, an actual expert, who figures the study probably offered a good "upper-bound" estimate. "Population-analysis sampling based techniques like this do tend to produce larger numbers than other analyses, but over the long term, while the sampling techniques tend to over-estimate, those higher numbers have tended to be quite a bit closer to the truth than the lower numbers generated by other techniques."



I'm not sure that all biases are towards an undercount. The guy at Good Math/Bad Math, who seems to know what he's talking about, says that these models tend to overcount, but that they also tend to be much more accurate than any other method.

The larger point is this: this appears to be the standard method used to estimate casualties in civil wars and natural disasters. Either its use is pointless in all cases, or it's pointless in this case for special reasons that exclude Iraq from the set of all civil wars, or the researchers have acted in bad faith. People like Galt need to choose one of the three. And if they're not willing to choose one of the three, they need to STFU. You'd think the last five years might have taught them the wisdom in doing that, but I suppose, like their president, they're not big on accountability.

Posted by: SomeCallMeTim | Oct 16, 2006 9:13:06 AM

"I think she was the most valid and convincing of the skeptics"

Jane Galt is always the most valid and convincing representative of folks holding utterly indefensible opinions.

Posted by: Petey | Oct 16, 2006 9:46:49 AM

Heh -- that strikes me as a perfect description. I met her this weekend and we had a very enjoyable row over union decline and the service economy. She delivers unsound views with a force and cogency I've rarely seen matched.

Posted by: Ezra | Oct 16, 2006 10:18:05 AM

She's one of those people who think that because they're smart, they're correct. She usually won't directly address your arguments, and relies almost exclusively on what I call the poor man's refutation, where calling into question a minor conclusion of an argument somehow refutes the entire premise and also completely validates her own opinion.

Posted by: mickslam | Oct 16, 2006 10:41:55 AM

Galt's University of Chicago teaches Nobel quality bullshit.

Posted by: bob mcmanus | Oct 16, 2006 10:46:17 AM

"She delivers unsound views with a force and cogency I've rarely seen matched."

Yup. She's a gift with language (before doing the MBA she tried to go the English-academic route), but anything quantitative and it's time for a bleg. Plus, as folks have noted, her economic education at U.Chicago is rooted in theories about 25 years out of date. We understand a lot more about imperfect markets than we did in the early 1980s, thank you very much.

So, someone who's a good quant guy and good with a turn of phrase (like Daniel Davies of Crooked Timber) can just shred her.

Posted by: Urinated State of America | Oct 16, 2006 11:49:14 AM

"Yup. She's a gift with language (before doing the MBA she tried to go the English-academic route), but anything quantitative and it's time for a bleg."

Oh yeah, and then there's the appeal to authority. She once did a climate-change post riffing on a Stephen Den Beste piece on climate-change technologies that was so bad it wasn't even wrong. I spent almost a year researching and working up bottom-up estimates of the economics of climate-change technologies, and corrected her, and her reaction was to say that my estimates of the cost of CO2 mitigation didn't match those of (unnamed) authorities she'd talked to. Bleh.

Posted by: Urinated State of America | Oct 16, 2006 11:53:24 AM

McArdle's complaint about the study published in the Lancet doesn't make much sense. Regarding "many communities were too dangerous for return questioning," anyone who knows anything about sampling (much less cluster sampling) would know that returning to the communities is irrelevant: the relevant issue is that the clusters in each survey be selected at random.

"[M]igratory patterns between the population survey and the sampling could have over-represented high mortality areas" makes no sense, since the paper specifically mentions that migratory patterns were taken into account. Did she read the paper? Doubtful.

The other issues ("families with combatants could have hidden deaths," "many families possibly underreported infant deaths," and "[s]ome families may have been totally obliterated -- a bomb landing on their house, say -- and so wouldn't have reported any deaths") would all have served to depress the number of deaths reported to the survey, so the true toll would have been higher than the survey's estimate.

Perhaps Ms. McArdle really should learn to critically analyze what she writes, instead of just bloviating.

BTW, if you would like a statistics expert's view on the survey, take a look at Tim Lambert's blog.

Posted by: raj | Oct 16, 2006 1:45:43 PM

Lambert is a computer scientist. Like me, he uses statistics in his work and probably got an A in his Stats classes, but that doesn't make him a "statistics expert" any more than John Lott's ability to cherrypick makes him an "econometrics expert" -- and Lott even taught the subject for a while.

Full disclosure: I like Lambert's pro-science, pro-facts, pro-math blogging a lot. Just keep your facts straight. His field is CS.

In re MM, aka 'Galt': I continue to be baffled that folks find her convincing. Perhaps it's her having been a very, very good high-school debater: as an erstwhile semipro shoveler of rhetorical manure, I can spot other practitioners a mile away. She does all the things on comment threads I used to do to win plastic trophies at age sixteen. Walk through the verbiage and pin her down on data, and her argument blows off into the distance like rice paper.

On cluster sampling, it's been my nonprofessional opinion that the real risk of the technique tends to be undercounting. Chu-Carroll is careful not to say that the technique itself establishes an upper bound, only that this is his "guess". He mentions that such techniques estimate higher numbers *than other techniques*, not higher than the true numbers, to which they tend to be closer in the end.

Anyone have a nice, objective, non-Iraq-focused discussion -- say from the known academic authority on the subject, or from someone like the Census Bureau? My bet is that it says what I expect it to, but unlike MM/Galt, I am open to being convinced by the facts.
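A quick way to make the over/under question concrete is a toy simulation (all numbers invented, nothing to do with Iraq): if the clusters really are chosen at random from a population whose mortality varies by region, the pooled estimate is noisy, but it isn't systematically biased in either direction.

```python
import random
import statistics

random.seed(1)

# Toy population: 100 regions with heterogeneous death rates,
# drawn uniformly -- invented numbers, purely for illustration.
regions = [random.uniform(0.002, 0.03) for _ in range(100)]
true_rate = statistics.mean(regions)

def cluster_estimate(n_clusters=33, people_per_cluster=40):
    """Pick clusters at random, survey everyone in each, pool the results."""
    picked = random.sample(regions, n_clusters)
    deaths = sum(sum(random.random() < r for _ in range(people_per_cluster))
                 for r in picked)
    return deaths / (n_clusters * people_per_cluster)

# Repeat the whole survey many times to see the sampling distribution.
estimates = [cluster_estimate() for _ in range(500)]
print(f"true rate:         {true_rate:.4f}")
print(f"mean of estimates: {statistics.mean(estimates):.4f}")
print(f"spread (stdev):    {statistics.stdev(estimates):.4f}")
```

Any single survey can land high or low, but the estimates center on the true rate; systematic over- or under-counting has to come from failures of the assumptions (non-random cluster selection, concealed deaths), not from the clustering itself.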

Posted by: wcw | Oct 16, 2006 3:33:58 PM

I think people are being a little harsh about Galt. All lay people work with the network of "people who know more" to which they have maximum exposure when discussing the stuff they have no business discussing. Our network, I suspect, tends to be better by virtue of the academics. (By way of analogy, our network tends towards the NYT/WaPo end of things, and hers towards the NYPost/WaTimes end of things.) I think she's likely to be wrong, and after five years of being wrong a lot, she might wonder why she's been wrong so often, etc. But I'd be surprised if there's much evidence that she's being willfully disingenuous here.

wcw: In comments, Chu-Carroll says that this method tends to produce estimates larger than the actual events. I didn't mean to imply that he was an expert in this area, just that he seemed careful, thoughtful, and familiar with the material.

Finally, Healy linked to the comments of an actual expert (stats professor), but his comments were limited and slightly technical, with no opinion on the paper itself.

Posted by: SomeCallMeTim | Oct 16, 2006 9:18:51 PM

Healy? Did I miss a comment somewhere?

Quite right on C-C; I didn't scroll to the comments. I'll accede to his experience with the method's application in past wars and disasters. I had been working off my memory of what the texts say; viz. the intuitive discussion of minefields by Davies (scroll down a bit for it), discussing the original paper.

Posted by: wcw | Oct 17, 2006 11:43:38 AM

Megan's critique started with (not an exact quote) "I don't believe it; those numbers are too big," and descended from there. She never gave a reason that 600K deaths in three years was too big, while the Lancet article itself referenced recent civil wars with death tolls in the multiple hundreds of thousands, as well as the fact that media accounts are underestimates.

It's not that she's less numerate than she thinks, it's that she's a right-wing propagandist hack.

Posted by: Barry | Oct 17, 2006 1:47:29 PM
