
Leading British Medical Journal’s Review Process Assailed

15 Dec 2009

Press Release
JCR Paper
JCR Technical Appendix
PRIO Battle Deaths Data
BMJ Paper
BMJ Technical Appendix
Paper on BMJ Peer Review

An ongoing controversy over global war death estimates moved to a new level this month with the release of a new study by researchers at the Human Security Report Project, the University of London, and Uppsala University’s Conflict Data Program.

"Estimating War Deaths: An Arena of Contestation", published in the December issue of the Journal of Conflict Resolution (JCR), presents a detailed critique of claims made in 2008 in the influential British Medical Journal (BMJ) (see Obermeyer, Murray and Gakidou 2008) that received widespread media coverage.

The BMJ press release promoting the Obermeyer et al. article stated that, "Globally, war has killed three times more people than previously estimated, and there is no evidence to support claims of a recent decline in war deaths." But the new JCR study argues that the authors of the BMJ article fail to prove either claim and that their article contains many methodological and factual errors.

The BMJ article targeted a much-cited scholarly paper published in 2005 by Bethany Lacina and Nils Petter Gleditsch of the International Peace Research Institute, Oslo (PRIO) that drew on a wide range of published sources. The PRIO paper had detailed a new dataset that revealed that the annual battle death toll around the world had declined by more than 90 percent between 1946 and 2002.

The BMJ article used relatively new World Health Organization (WHO) population health survey data to estimate violent war deaths, arguing that survey methods provide a more reliable way of estimating these deaths than the methodology used to create the PRIO dataset. But Michael Spagat, Andrew Mack, Tara Cooper and Joakim Kreutz, the authors of the Journal of Conflict Resolution study, argue that using surveys to measure violent war deaths confronts major, and unacknowledged, challenges, and that the article by Obermeyer and colleagues is deeply flawed. Two claims, they note, are particularly problematic:

1. The BMJ article asserts that the PRIO study only captures “… on average a third of the number of deaths estimated from population based surveys.” This claim fails for the following reasons:

  • No conclusions about global patterns of war deaths should have been drawn from the BMJ article's convenience sample of just 13 war-affected countries. The PRIO dataset, by contrast, provides battle death estimates for all 202 conflicts that were underway around the world during the period covered by the BMJ article.
  • The 13-country convenience sample is biased. Obermeyer and colleagues had to reject results for 33 countries that WHO surveyed, because the surveys recorded too few deaths for a viable estimate. Yet the PRIO dataset records battle deaths in most of these countries––sometimes in large numbers.
  • For nearly 40% of countries reviewed, the PRIO fatality estimates are higher than the estimates from the WHO surveys. Yet the BMJ authors claim that PRIO estimates are consistently too low.
  • The BMJ authors also fail to compare like with like––their very broad category of “war deaths” includes a far greater range of fatalities than PRIO’s relatively narrow category of “battle deaths.” Treating these categories as equivalent invalidates any comparisons between them.
  • The claim that PRIO extracts its information primarily from “media sources” is used in part to substantiate the claim that the dataset undercounts battle deaths. It is, however, incorrect. The PRIO data are drawn from a very wide variety of sources including scholarly research and official reports.
  • The recall period for the WHO surveys––up to 40 years––was far in excess of recommended practice.

2. The assertion that "…there is no evidence to support a recent decline in war deaths" fails because:

  • The BMJ study covers a much shorter time period than that covered by the PRIO dataset, so again like is not being compared with like. The period covered in the BMJ article, 1955 to 1994, misses both the pre-1955 battle death high point and the post-1994 low points recorded in the PRIO battle death dataset. The most recent three-year update of the PRIO dataset confirms the long-term decline in battle deaths, although there has been a small increase since 2001.
  • The BMJ authors’ extrapolated time trends are derived from their biased convenience sample of just 13 countries. They differ from those in the PRIO dataset solely because of the impact of an estimated constant that is statistically insignificant. More appropriate estimation procedures would have preserved the downward trend found in the PRIO battle death dataset.

The authors of the new Journal of Conflict Resolution study argue that there is little support for the claim that surveys are the gold standard for measuring violent conflict deaths. First, no research has ever independently validated the accuracy of nationwide estimates of violent conflict deaths derived from surveys. Second, in several cases where different surveys have been carried out in the same war-affected country over similar periods of time, nationwide war death estimates have been radically different.

The online Appendix to the Journal of Conflict Resolution study further argues that small surveys are inappropriate instruments for measuring violent deaths, because most civil wars today tend to concentrate in a few geographically localized areas. In these circumstances cluster surveys tend either to fail to detect any war deaths or––when they do––overestimate their impact by a wide margin.
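The arithmetic behind this point can be illustrated with a hypothetical back-of-the-envelope sketch (the numbers below are invented for illustration and are not taken from the Appendix). Suppose violence in a country of 100 districts is concentrated entirely in 2 of them, and a small cluster survey visits 5 districts and extrapolates nationally:

```python
from math import comb

districts = 100                    # districts in a hypothetical country
war_districts = 2                  # violence concentrated in just 2 districts
deaths_per_war_district = 5_000
total_deaths = war_districts * deaths_per_war_district  # true toll: 10,000

sample = 5                         # districts visited by the survey
scale = districts / sample         # national extrapolation factor (20x)

# Probability that the sample contains no war-affected district at all
p_miss = comb(districts - war_districts, sample) / comb(districts, sample)

estimate_if_miss = 0                                    # survey finds nothing
estimate_if_one_hit = deaths_per_war_district * scale   # one war district sampled

print(f"P(survey detects no war deaths): {p_miss:.3f}")        # ~0.902
print(f"Estimate when sample misses:     {estimate_if_miss}")
print(f"Estimate with one war district:  {estimate_if_one_hit:.0f}")
print(f"True death toll:                 {total_deaths}")
```

In this toy setup roughly 90 percent of surveys would report zero war deaths, while most of the remainder would extrapolate to about ten times the true toll, which is exactly the fail-to-detect-or-overestimate pattern the Appendix describes.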

The Appendix also reveals the extent to which the WHO surveys repeatedly fail to detect violent war deaths in countries where the PRIO dataset––which supposedly undercounts fatalities––reports many battle deaths. In the case of the Philippines, for example, the WHO surveys failed to find any violent war deaths during two periods in which the PRIO dataset reports at least 70,000 battle deaths.

The article by Obermeyer and colleagues presented a critique of the PRIO dataset in a high-visibility journal that unjustifiably faulted the scholarship of the PRIO researchers who produced it. Yet the article signally failed to substantiate any of its major criticisms, while containing serious methodological and factual errors.
