I seem to be playing catch-up in all arenas of my life, juggling a few too many tasks on my plate. Thus, this belated post on last week’s CSIS event “Using Data to Drive Better Global Health Impacts“.
Featuring three speakers with distinctly different backgrounds (but a common interest in good monitoring and evaluation data), the panel presentation was insightful and interesting, and the questions raised by audience members were quite thought-provoking. PowerPoints for each of the three presenters are posted to the CSIS website, including Gina Dallabetta (Gates Foundation, on monitoring data with a programmatic example from India), Paul Bouey (Deputy Global AIDS Commissioner with PEPFAR), and Ruth Levine (Deputy AA in Policy, Planning, and Learning, known from her previous post at CGD).
Of particular interest to the global development community is the proposed evaluation policy presented by Ruth Levine. By outlining all of the ways USAID fails to meet rigorous evaluation standards (or at least to standardize evaluations across programs), she demonstrated the need for consistent language and requirements. I would encourage you to read through her presentation for more detail. The policy is currently available for public comment, and while some of the pieces are more controversial – having timed evaluation requirements for projects receiving more than US$XXX (TBD), for example – many simply adhere to solid principles of evaluation science.
Levine also addressed the need for agreement on indicators to measure key components of the Global Health Initiative, including health system strengthening, and was willing to admit that not every piece of the proposed evaluation policy would make it through negotiations to be put in place in January 2011.
In her closing remarks she asked two things of the audience and the development community at large. First, for input on the proposed evaluation policy, stating, “If you can fill a room, we will come to hear your feedback.” Second, and possibly the most honest statement of the event, she asked all of us to give USAID room to fail and to publicly admit and discuss those failures. Given the emphasis on positive results and the tendency to talk in happy anecdotal stories rather than admit when something does not work – often out of fear of having funding cut – this request was encouraging for those of us who see the value in evaluating and learning from programs that don’t work as well as expected.
I applaud Levine for her candid, honest presentation (and very much enjoyed those by Dallabetta and Bouey), and hope she stretches the agency to admit failures, continues to identify ways to improve methods and processes, and pushes for positive change. I expected as much from her, after following her previous work at CGD, and will look forward to seeing the final evaluation policy in January 2011.