Winning the war for Iraq’s dead
Counting the dead in war zones is what epidemiologist Gilbert Burnham and his team do for a living. But last year when they said 600,000-plus Iraqis had been killed in the war, the US, UK and Iraqi governments furiously attacked their figures for being far too high – though it turns out that UK experts agree with Burnham. Celeste Biever caught up with him recently
BY: Celeste Biever
There were already estimates of the dead in Iraq. Why did you decide to go ahead with your survey?
Our intentions were not political. Our centre is for refugee and disaster studies and this is simply the kind of thing we do. Other counts, such as the Iraq Body Count, which consists of volunteer academics and activists based in the UK and the US, rely on reports of deaths in the English-language press, but the press is in the business of producing news, not statistics. The IBC uses news reports mainly written in English, by people who can’t leave a very narrow area of Baghdad, while violence is worse in the Al Anbar and Diyala provinces. Mortuaries provide figures but a lot of bodies don’t make it there. Also press accounts and mortuary numbers record violent deaths, but people die in war from many causes.
Your figure was an order of magnitude higher than the IBC’s. Why should we trust your method?
Because it’s probably the best one for measuring the burden of conflict on the population. It’s used worldwide: in the Congo, in Banda Aceh after the Asian tsunami, for mortality data in Darfur, Angola and Uganda. And one of my former students, Paul Spiegel, used the technique to measure Serb activities against the Albanians in Kosovo. It was used as evidence in the trial of Slobodan Milosevic.
Why do you think your survey has been criticised?
These are unpleasant results, and they are associated with a war that has seriously divided the countries participating. Some people felt that we were not supporting the troops and were unpatriotic. I am not angry about that. Malicious as some of the hate mail I received was, I can see their point of view, because I was in the military, in a combat unit in Korea during the Vietnam war. These soldiers in Iraq are volunteers, by and large, with good intentions, and they find themselves in a very difficult environment. As epidemiologists, we can produce the numbers, a good explanation for our methods and even a pretty strong statement on what they mean, but getting them accepted in policy circles and in people's thinking takes time and is often difficult.
How did it feel to have the president attack you?
It’s not surprising to get criticism from people closely identified with the war. On the other hand, public health research often sends people to sleep, so it was gratifying in an odd way to be associated with research that grabbed attention, especially the attention of heads of state.
You’ve said you will release the raw data to scientific groups who apply, “scrubbed” of the neighbourhoods where it was collected to avoid identifying the interviewees. Will this help?
I don’t know. Much of the criticism is based on unhappiness with the results. A repeat analysis won’t turn the figure from 600,000 to 60,000. Our intent is to be more transparent. We believe we will see numbers that are fairly consistent with ours. I received a lot of supportive emails from people who admired the courage of the team so I think many people already believe our figure.
What was your methodology?
We did a “cluster” survey, where we divided the country into clusters, picked a certain number from each province at random and sent Iraqi doctors to knock on doors in those clusters to ask how many people in each household (who had lived there at least three months) had died from any cause. We used that to produce a death rate for the clusters and then for the population of the whole country. The key is to try to be sure that you talk to enough people and you don’t have biases in selection.
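The arithmetic behind that extrapolation can be sketched in a few lines. The household counts, cluster layout and population figure below are invented for illustration; they are not the survey's actual data.

```python
# Minimal sketch of a cluster-survey death-rate estimate.
# Each cluster is a list of households; each household reports
# (residents who lived there at least three months, deaths among them).
clusters = [
    [(6, 0), (5, 1), (7, 0), (4, 0)],   # cluster 1 (hypothetical)
    [(8, 1), (3, 0), (6, 0), (5, 1)],   # cluster 2 (hypothetical)
    [(4, 0), (6, 0), (7, 1), (5, 0)],   # cluster 3 (hypothetical)
]

total_people = sum(n for cluster in clusters for n, _ in cluster)
total_deaths = sum(d for cluster in clusters for _, d in cluster)

# Crude death rate across all sampled households for the recall period.
death_rate = total_deaths / total_people

# Extrapolate to a national population (figure assumed for illustration).
national_population = 27_000_000
estimated_deaths = death_rate * national_population

print(f"sampled rate: {death_rate:.4f}")
print(f"extrapolated deaths: {estimated_deaths:,.0f}")
```

A real analysis also computes confidence intervals that account for the clustering (people in one neighbourhood share risks, so clusters carry less information than the same number of independent households), which this sketch omits.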
How did Iraqis react to the interviewers?
The general sense the interviewers had was that people were happy to talk to them. They felt gratified that someone was asking and were eager to talk about their experiences.
Could you trust what people told interviewers?
People might forget exact dates, but death is a big event, so they don’t tend to forget that. To double-check, our interviewers asked for death certificates. Ninety per cent of the people who were asked produced them. Of course, a certificate doesn’t stop you hiding deaths: one could imagine households might be reluctant to mention it if someone got killed while involved in criminal activity or sectarian violence. But then the result would be an underestimate, not an overestimate.
Were there things you wanted to know you couldn’t get the interviewers to ask?
We were afraid, for example, to ask how people died as it might have made us look like we were representing a group looking for targets – and that could have endangered the interviewers. We also did not distinguish non-combatants from active combatants because asking that question was far too sensitive.
Were there other limits on your methodology?
Concern for the safety of our interviewers helped determine survey design. Coming up with a death estimate per governorate would have been the best but it would have required more clusters, and since each cluster has a risk associated with it we opted for a national figure. Also, we couldn’t use GPS devices as we had in 2004, where we randomly selected a GPS coordinate in each cluster and used that house as a start point. With more car bombs set off remotely, the team was concerned that if they were spotted holding a GPS receiver their life expectancy would be fairly short.
What did you do instead?
We went back to what we did before GPS. The interviewers wrote the principal streets in a cluster on pieces of paper and randomly selected one. They walked down that street, wrote down the surrounding residential streets and randomly picked one. Finally, they walked down the selected street, numbered the houses and used a random number table to pick one. That was our starting house, and the interviewers knocked on doors until they’d surveyed 40 households. It was more complicated than using GPS but not inferior: the results were very close to the GPS survey. The team took care to destroy the pieces of paper which could have identified households if interviewers were searched at checkpoints.
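The three-stage draw the interviewers carried out on paper can be sketched as follows; the street names, counts and the `random` calls standing in for the paper slips and random number table are all assumptions for illustration.

```python
import random

# Stage 1: the principal streets in the cluster (the "pieces of paper").
main_streets = ["Principal St A", "Principal St B", "Principal St C"]
main = random.choice(main_streets)

# Stage 2: the residential streets branching off the chosen one.
residential = {
    "Principal St A": ["Side St 1", "Side St 2"],
    "Principal St B": ["Side St 3", "Side St 4", "Side St 5"],
    "Principal St C": ["Side St 6", "Side St 7"],
}
side = random.choice(residential[main])

# Stage 3: number the houses on the chosen street and pick a start house
# (the team used a random number table; randrange plays that role here).
houses_on_street = 30
start_house = random.randrange(1, houses_on_street + 1)

# From that start house, knock on consecutive doors until 40 households
# have been surveyed.
print(f"start at house {start_house} on {side}, off {main}")
```

Each stage is a uniform draw over a list built on the spot, which is what lets the procedure approximate the GPS method without any equipment that could mark the team as a target.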
Why didn’t you accompany the interviewers?
I don’t speak Arabic and I don’t look like an Iraqi, so my chances of surviving very long were not strong. Our Iraqi colleagues said it would endanger them too. We met in Jordan to design the survey, met again at the end to begin analysis, and kept in email and phone contact in between.
Were the interviewers willing to risk their lives?
They knew there was a risk. Some dropped out before we started, but once we started, everybody stuck it out. There was a strong feeling of professionalism, and I take my hat off to them. These were the most courageous people in the whole operation. The rest of us took flak for the survey, but that’s nothing compared to their courage. We were very worried about someone dying. We took all the safeguards we could, and I tracked what was happening very closely throughout the three months it took. I remember the day word came back we had finished the last cluster and all eight interviewers, their supervisor and drivers were back safely. I was just elated.
Gilbert M. Burnham trained as a doctor, then went on to manage health services and oversee research in Zambia and Malawi for 15 years. He is now co-director of the Center for Refugee and Disaster Response at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland.