Today the Center for Global Development hosted a brown bag lunch session analyzing the data collection methodology and data quality of the 2010 Afghanistan Mortality Survey (AMS). The survey was the first to generate direct estimates of both child and adult mortality in Afghanistan; previous health survey data was limited at best, particularly before 2001.
When first released, the survey results were exciting for the international community: they showed that both child and maternal mortality were lower than previous estimates had indicated, and that the country had improved on many health indicators. When the data were gone over with a fine-tooth comb, though, serious questions arose about data quality, particularly the reliability of the numbers.
Examining the data more closely: Presenter Kenneth Hill, a professor at the Harvard School of Public Health, discussed issues with both the under-five and maternal mortality statistics. The survey's under-five mortality rate was 84 deaths per 1,000 live births when calculated from household death reports, but 71 deaths per 1,000 live births when calculated from the pregnancy histories collected. This in itself was a surprise: household data are generally expected to yield a lower under-five mortality estimate than a pregnancy history, not a higher one. Both figures, though, were markedly lower than expected.
Professor Hill raised four key concerns about the data, which he manipulated and re-analyzed in various ways in an effort to establish the validity of the estimates:
- The trend in under-five mortality over time in the South (one of three regions where data was collected, considered the most conflict-ridden) was implausible.
- Sex ratios at birth (number of male infants born per 100 female infants) were markedly skewed in some regions. One would expect 102-107 males per 100 females born; sex ratios in multiple regions exceeded 110, and ran as high as 138 in the South. This points to the possibility that respondents deliberately omitted female children from their questionnaire responses, or to sex-selective abortion (which is far less likely in Afghanistan than omission).
- The data showed severe underreporting of neonatal deaths relative to the data available from the regional DHS. Comparing the neonatal mortality rate to the post-neonatal mortality rate, the proportion of neonatal deaths is far lower than expected (see graph below).
- Finally, among interviewers who had conducted at least 50 interviews (a reasonable sample size), not a single one in the North or Central regions reported zero child deaths, while a majority of interviewers in the South reported zero child deaths across all their households, raising suspicion of interviewer or respondent bias.
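Checks like the sex-ratio test above are easy to reproduce on any tabulated survey data. As a minimal illustrative sketch (the region names match the survey's three regions, but the birth counts below are hypothetical, not actual AMS tabulations), flagging regions with implausible sex ratios at birth might look like:

```python
# Illustrative data-quality check: flag regions whose sex ratio at birth
# (male births per 100 female births) falls outside the expected 102-107 range.
# Birth counts are hypothetical, chosen only to mirror the pattern described.

EXPECTED_RANGE = (102, 107)

births = {
    # region: (male births, female births) -- hypothetical counts
    "North":   (5100, 4950),
    "Central": (5300, 5050),
    "South":   (6200, 4500),
}

def sex_ratio(males, females):
    """Male births per 100 female births."""
    return 100 * males / females

for region, (m, f) in births.items():
    ratio = sex_ratio(m, f)
    low, high = EXPECTED_RANGE
    flag = "" if low <= ratio <= high else "  <-- outside expected range"
    print(f"{region}: {ratio:.1f} males per 100 females{flag}")
```

With these made-up counts, the South comes out near 138 males per 100 females, the kind of value that prompted the omission concern in the actual survey.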
What next then? The methods used to collect the data were considered gold-standard best practice for implementing a household survey. While the Afghanistan Public Health Institute and Central Statistics Office took the lead on the task, demographic and health survey gurus from ICF Macro provided technical assistance along the way. There are many challenges in conducting a national household survey, particularly in conflict areas, and we shouldn’t shy away from questioning results and methods when something seems too good to be true.
The take home message from Ken and the panelists: data collection in a conflict setting is rough, and doesn’t typically go entirely according to plan. The inconsistencies and issues with the data particularly from the South, considered the least stable of the three regions where the survey was conducted, drive that point home. These challenges point to the importance of vital registration, noted one attendee, and additional research on how to support and promote institutionalization of vital registration in the various countries where we work would be interesting and useful.