Can we genuinely learn from evaluations?
September 13, 2011 - Heinz Greijn (Editor-in-Chief, Capacity.org)
The concept of capacity development with its in-built appreciation of endogenous systems and ownership is at the heart of the principles of the Paris Declaration on Aid Effectiveness. As developing countries increasingly take control over their own capacity development, donors and international development agencies need to respect their leadership and take on a much more supportive role than they have in the past. However, not all external partners can easily adjust to this new role. The contributions in Issue 43 of Capacity.org illustrate that those who do can make a vital contribution in supporting endogenous development efforts.
However, telling positive stories is not sufficient to help the aid sector solve its accountability problems. Donors need to measure and quantify capacity development initiatives in order to demonstrate that aid works to taxpayers who are not in a position to experience the benefits themselves. The reality, though, is that real advances in capacity are context-specific and not easy to predict. They cannot be conveniently measured using simple results chains and performance targets. As Doug Horton explains in the lead article on evaluating capacity development, the pressure on practitioners to generate quantifiable data about results is mounting. Evaluations are increasingly becoming bureaucratic exercises that do little to improve the practice and support of capacity development processes.
Without disputing the need for evaluations as a tool for enhancing accountability, Doug's article makes the case for more and better evaluations aimed at learning and programme development.
I agree with Doug's conclusion. However, as every development practitioner and project manager knows, even learning-oriented evaluations yield documentation that will sooner or later feed into external, accountability-focused evaluations. This realisation alone is likely to affect the learning process. The question I would like to put out there is: “How can evaluations focused on learning be organised in such a way that the learning is not compromised by accountability considerations?”