The AP has an article on professional reaction to the new Iraq mortality study. Not one statistics expert finds a major flaw, though one feels that the confidence intervals were too narrow. It would be nice to get more detail on his argument here. But, in general, the response from epidemiologists and statisticians is very positive.
Several biostatisticians and survey experts were supportive of the work.
“Given the conditions (in Iraq), it’s actually quite a remarkable effort,” said Steve Heeringa, director of the statistical design group at the Institute for Social Research at the University of Michigan.
“I can’t imagine them doing much more in a much more rigorous fashion.”
He said the study made “minor departures” from the standards generally used in national surveys for choosing what households to interview. Whether those departures, brought on by wartime conditions in Iraq, introduced a bias in the results is impossible to measure from the data alone, he said.
Frank Harrell Jr., chair of the biostatistics department at Vanderbilt University, called the study design solid and said it included “rigorous, well-justified analysis of the data.”
And Richard Brennan, head of health programs at the New York-based International Rescue Committee, said the study’s survey approach was typical.
“This is the most practical and appropriate methodology for sampling that we have in humanitarian conflict zones,” said Brennan, whose group has conducted similar projects in Kosovo, Uganda and Congo.
“While the results of this survey may startle people, it’s hard to argue with the methodology at this point.”
Donald Berry, chairman of the statistics department at the University of Texas’ M.D. Anderson Cancer Center in Houston, said he believes the study was done “in a reasonable way.” But he said the range of uncertainty given for the estimates was much too narrow, because of potential statistical biases in the survey.
While it’s impossible to calculate a better range that accounts for that, he said, it wouldn’t be surprising if the low end dropped about four-fold to 100,000 deaths. A wider range of uncertainty would make the 655,000 figure less meaningful, he said.
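As a back-of-the-envelope illustration of Berry’s point (a sketch using the study’s rounded published interval, not his actual calculation): if unmeasured bias adds variance, the confidence interval stretches around the same center, and on a log scale it takes roughly a four-fold inflation to pull the low end down near 100,000.

```python
import math

# Hypothetical sketch, not the study's own calculation: widen a 95% CI
# by a variance-inflation factor on the log scale (ratio-type estimates
# are usually treated multiplicatively). The published interval was
# roughly 393,000 - 943,000.
low, high = 393_000, 943_000

center = math.sqrt(low * high)                       # geometric midpoint
half_width = (math.log(high) - math.log(low)) / 2    # 1.96 * SE on log scale

for inflation in (1.0, 2.0, 4.0):  # made-up inflation factors
    lo = math.exp(math.log(center) - half_width * inflation)
    hi = math.exp(math.log(center) + half_width * inflation)
    print(f"inflation {inflation}: {lo:,.0f} to {hi:,.0f}")
```

Under this crude model, an inflation factor of about four brings the lower bound down to the neighborhood of 100,000, which is the scale of the drop Berry suggests; whether the actual biases justify anything like that factor is exactly what he says cannot be computed from the data.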
Meanwhile, one of the study’s authors said he’s confident in the work’s conclusions.
If the estimate seems high, it’s because the door-to-door survey turned up deaths that are typically overlooked when sought by other means in wartime situations, said Les Roberts, who was with Johns Hopkins when he co-authored the study but has just taken a post at Columbia University.
As for extrapolating a nationwide figure from the sample of the few hundred deaths actually reported, “almost every statistic you’ve ever heard about health in America comes from a sample,” Roberts said. “It may not be extremely precise, but at least it gets us in the right ballpark.”
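Roberts’ extrapolation point can be made concrete with a toy calculation. The crude mortality rates below are close to the figures the study reported (about 5.5 deaths per 1,000 per year pre-invasion versus about 13 after), but the arithmetic is my own illustration, not the authors’ method, which adjusted for cluster design and uncertainty.

```python
# Toy excess-mortality extrapolation; inputs are approximations for
# illustration only, not the study's actual estimation procedure.
pre_war_rate = 5.5 / 1000     # annual deaths per person before the invasion
post_war_rate = 13.3 / 1000   # annual deaths per person after the invasion
population = 26_000_000       # rough population of Iraq
years = 3.25                  # roughly March 2003 to mid-2006

excess = (post_war_rate - pre_war_rate) * population * years
print(f"estimated excess deaths: {excess:,.0f}")
```

With inputs of this size the simple arithmetic lands in the same ballpark as the 655,000 estimate, which is Roberts’ point: the headline number is an ordinary sample-to-population extrapolation, not an exotic one.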
The lone serious professional critic is Michael E. O’Hanlon of the Brookings Institution, who says “I have a hard time seeing how all the direct evidence could be that far off … therefore I think the survey data is probably what’s wrong.” Unfortunately, he doesn’t respond to the Lancet authors’ claim that all prior experience in war is that passive surveillance methods radically underestimate mortality. My own reading of the literature supports this. Further, the Lancet authors claim that Iraq’s system underreported mortality by about two-thirds even before the war threw everything into chaos. It seems to me that critics who are relying on passive surveillance need to respond to this point.
October 12th, 2006