Evaluating capacity development
13 September 2011
Why evaluations seldom satisfy – could we do better?
As capacity development becomes mainstreamed in international development assistance programmes, demand for the systematic evaluation of capacity-development initiatives is growing. Doug Horton explains how the evaluation of capacity development can be improved.
Evaluations now feature prominently in the governance and accountability procedures of virtually all public programmes. These are expected to generate practical information on the results of capacity-development initiatives – and offer lessons about how to improve such initiatives. But while evaluations are now routine, they seldom satisfy either donors or programme managers. Less is being learned from the evaluations than was expected, and the lack of ‘hard evidence’ on the impact of capacity-development programmes may jeopardise future funding.
However, the evaluation of capacity development can be improved by clarifying the focus and purpose of evaluations, expanding professional development and knowledge sharing among evaluators, drawing on systems thinking, and shifting attention from accountability to learning and programme improvement.
High expectations, disappointing results
Evaluations are expected to provide information about how to improve current and future programmes, to measure the results of completed programmes and to help decision makers to choose between competing demands where public resources are scarce.
But there is growing frustration with evaluation processes and results. Programme managers and staff members complain that evaluations are intrusive and burdensome and seldom produce useful results that improve programmes. At best, they see evaluations as costly requirements for doing business with donors – and at worst, they see them as potential threats to their programmes and their jobs.
And officials in development agencies complain that evaluations are not producing the types of ‘hard evidence’ they need to justify continued funding for capacity-development interventions.
One of the reasons for the frustration with evaluations is that different stakeholder groups expect different things from an evaluation, and the ‘multi-purpose evaluations’ that are carried out to please everybody often fail to satisfy anyone in the end.
Four basic evaluation questions
An evaluation of capacity development can address one or more of the following basic questions:
- How can a capacity-development process be improved?
- What have the results of the capacity-development process been?
- How can support for capacity development be improved?
- What have the results of external support been?
These four questions reflect the intersection of two different objects and two different purposes of evaluation.
The object of the evaluation
The first two questions focus on the local, or endogenous, process of capacity development that takes place in a particular physical and institutional context. These questions are mainly of interest to local groups with stakes in the capacity-development process. In contrast, the third and fourth questions focus on an external aid programme. These questions are mainly of interest to external groups with stakes in the aid programme.
There is seldom a one-to-one relationship between a local capacity-development effort and an external aid programme. Some capacity-development efforts are associated with a single external support programme, but most local efforts are associated with several external programmes. Similarly, an external support programme may focus its attention on a single local capacity-development process, or it may work with capacity developers in a number of different locations.
So evaluating a capacity-development process that operates in a specific location and institutional setting is quite a different task from evaluating a capacity-development support programme that might be working in a number of locations and operating within its own institutional setting. The assumption that a single evaluation can adequately assess both the local capacity-development effort and an international support programme, using the same framework, methods and information sources, has led to much confusion and frustration in the field, and to many uninspiring evaluation reports that have been shelved and quickly forgotten.
The purpose of the evaluation
What evaluators refer to as a formative evaluation aims to promote learning and to improve an ongoing programme. In contrast, a summative evaluation aims to tally up the results of a mature or completed programme. Irrespective of whether the object of an evaluation is an endogenous capacity-development process or an external support programme, an evaluation may have a formative or summative purpose. It may aim to improve the process or programme in question, or it may seek to measure results or benefits.
Whereas questions 1 and 3 are concerned with learning and improvement, questions 2 and 4 are concerned with measuring results and benefits. Formative evaluations are mainly of interest to programme managers and operators, who may use their results to improve their work. Summative evaluations, on the other hand, are mainly of interest to external stakeholders (particularly funding agencies and the taxpayers or other donors who support them), who may use the results to justify past decisions or to make important decisions about future programming. Such decisions may involve the continuation, scaling up or termination of the programme being evaluated, and they may also affect personnel. This explains the apprehension (or sheer terror) that managers may feel when faced with a summative evaluation of their programme.
One size does not fit all needs. The idea of utilisation-focused evaluation (UFE), as elaborated by Michael Patton, provides a useful general framework for designing and managing evaluations. This framework can be applied to capacity-development processes and support programmes. A UFE is based on the principle that an evaluation should be judged by its usefulness. No matter how technically sound and methodologically elegant an evaluation is, it is not truly a good evaluation unless the findings are used.
Experience shows that learning and the use of evaluation results to improve programmes are both enhanced by the direct participation of programme stakeholders in all aspects of the evaluation. Consequently, professionally facilitated participatory evaluations are ideal for promoting learning and programme improvement.
External stakeholders, who represent taxpayers or other funders, expect accountability-oriented evaluations to be carried out by ‘objective’ external evaluators who operate at arm’s length from the programme. Quantitative methods used by measurement specialists are generally preferred to more qualitative ‘soft and fuzzy’ constructivist methods.
To encourage the use of an evaluation’s results, the evaluator should engage those commissioning the evaluation in decisions on evaluation methods, information sources and presentation styles. However, the involvement of stakeholders needs to be managed carefully to prevent the results from being unduly influenced by different interest groups.
The evaluation types that are most appropriate for responding to the four basic evaluation questions can be summarised as follows:
Four types of evaluation, depending upon the evaluation object and purpose:
- Question 1: learning-oriented (formative) evaluation of an endogenous capacity-development process
- Question 2: results-oriented (summative) evaluation of an endogenous capacity-development process
- Question 3: learning-oriented (formative) evaluation of an external support programme
- Question 4: results-oriented (summative) evaluation of an external support programme
The challenges evaluators face
Capacity development presents a number of challenges for evaluators, and these have stymied progress in this area. Five major challenges are especially important:
- Evaluation has been mainstreamed as a tool for accountability, not improvement.
- Capacity-development processes are inherently complex.
- Capacity-development interventions are often badly designed.
- Evaluations are often weak in their design or methods.
- Knowledge sharing and professional development are often limited.
Evaluation for accountability
Although evaluation has become a mainstream and routine administrative procedure, it has not become mainstreamed as a learning tool or a management practice aimed at improving programmes. For this reason, capacity-development efforts are usually evaluated to meet a donor agency’s administrative requirements rather than to provide programme managers or staff members with information that will improve their work.
Using evaluation as an accountability tool has led to an adversarial evaluation culture in which programme managers and operators try to present themselves to external evaluators in the best possible light. External evaluators, in their turn, try to get beyond the rosy picture on the surface of the programme, and uncover its weaknesses and failures. Adversarial evaluations are seldom of much use to either the donor or the programme itself, except to confirm preconceptions or legitimise decisions that were taken before the evaluation.
Complex processes
Anyone who has engaged in capacity-development initiatives or their evaluation quickly realises that capacity-development processes are inherently complex and their results unpredictable. Developing capacity – at the individual, organisational and institutional levels – involves cycles of learning through trial and error, and applying the lessons learned in the next cycle of activities. Many factors that are beyond the control of programme managers influence the direction and results of capacity-development efforts. This means that managers need considerable room for manoeuvre: they must react to unexpected events and adapt their strategies in response to new challenges and opportunities as they arise.
The complexity of capacity-development processes and the emergence of results from numerous unpredictable influences pose significant challenges for evaluators. This is particularly true for evaluators steeped in linear results-based planning and evaluation frameworks, such as the logical framework. To provide useful services for their clients and to learn new approaches and tools, evaluators need to move beyond such linear frameworks to systems thinking, innovation studies and developmental evaluation.
Weak programme design
Because capacity-development processes cannot be neatly planned and implemented with predictable results, traditional objective and indicator-based models are inappropriate for planning and evaluating capacity-development initiatives. In fact, using such models can undermine capacity development by straitjacketing managers, diverting scarce resources from programme activities to ‘mindless’ monitoring exercises, and discouraging the revision of objectives and strategies based on learning from experience. This is why programmes that develop and implement detailed plans often contribute less to local capacity development than ones that employ more flexible, adaptive management approaches.
Although capacity developers should not invest heavily in detailed, indicator-based plans, it is important that capacity-development interventions have well-thought-out designs. Unfortunately, the planning documents for most interventions – including those containing numerous quantitative indicators for activities, outputs, outcomes and expected impacts – seldom present credible programme theories that are clear about what types of capacity are to be developed, how the programme is expected to work and how it proposes to bring about its results.
In other words, very few capacity-development interventions have design documents that state clearly how the proposed activities are expected to bring about behavioural changes that will ultimately lead to sustainable capabilities for individuals, organisations and broader systems. Having such a theory is essential for programme operators (and evaluators) to learn from experience, by comparing expectations with actual results and reflecting on the differences between them.
All capacity-development interventions are based on theories of some sort. However, they tend to be theories that are implicit in the minds of those who design and implement the interventions, rather than explicit theories in the form of coherent narratives that can be discussed, debated, improved and shared. Because there is seldom a consensus among key stakeholders on the programme theory, various actors may have different – sometimes conflicting – concepts of the programme’s goals and strategies and how its activities are expected to strengthen capacity.
This is not necessarily a bad thing. Complex programmes need to evolve and find their way over time. However, it is useful for programme managers, staff and other key stakeholders to reflect on their theories from time to time and re-examine the changes they are attempting to bring about. In this way, they can understand the various perspectives that might exist, and move in the direction of a consensus.
Reflecting on a programme’s underlying theories and assumptions is especially important when preparing for an evaluation. If an evaluation team arrives and does not find an explicit programme theory, the evaluators may be tempted to introduce their own theory – and so produce yet more confusion and frustration.
Weak evaluation methods
In addition to the inherent complexity of capacity-development processes and weaknesses in the design of capacity-development interventions, the terms of reference for capacity-development evaluations also tend to be weak. Frequently, evaluators are expected to answer several challenging evaluation questions with a single evaluation carried out over a short period of time and with limited resources.
The objectives of an evaluation may include identifying ways to improve the local capacity-development process as well as the external support programme (frequently confusing one with the other), and measuring the costs and benefits of both the capacity-development process and the intervention (again confusing the two). As already noted, different types of evaluation are needed to answer these different evaluation questions. Attempting to address all the questions within the same evaluation is a recipe for confusion and frustration during the evaluation process – and generally results in an unfocused, muddled evaluation report that fails to satisfy any of the principal stakeholder groups.
Evaluation designs for capacity-development interventions often call for evaluators to apply a range of qualitative and quantitative methods and to conduct an evaluation that is ‘participatory’ while conforming to general evaluation standards such as those issued by the Development Assistance Committee (DAC) of the Organisation for Economic Co-operation and Development (OECD).
In a recent evaluation, the contracting agency requested that the evaluation be ‘participatory’ and ‘help build capacity’, and also that it follow the DAC Quality Standards for Development Evaluation. The problem was that the DAC standards – which were developed for accountability-oriented evaluations conducted by development agencies – emphasise the importance of having an evaluator who is independent of programme management and policy-making processes. Participatory exercises that actively engage local stakeholders in the evaluation – the essence of learning- and improvement-oriented evaluations – cannot logically comply with such standards.
There is no ideal set of evaluation methods. Evaluators need the freedom and the ability to select methods for collecting and analysing information and for reporting conclusions that are appropriate for each evaluation. These will depend on the evaluation’s purpose, the object of the evaluation, and local circumstances such as the time and resources available.
Limited professional development
The final challenge that has stymied progress in evaluating capacity development is the limited scope of knowledge sharing and professional development opportunities in this area. A number of international development organisations have issued guidelines for evaluating capacity development in their particular areas of interest, and these are available on the internet. However, little information is available about how these guidelines have been used and what results have been obtained.
Few reports on evaluations of capacity-development initiatives are available in the public domain, and the few papers that have been published in professional journals tend to be based on a single evaluation study. There are no textbooks on the evaluation of capacity development, and practically no evaluation training programmes include a module on evaluating capacity development. One recent exception to this general rule is provided by Japan’s Foundation for Advanced Studies on International Development (FASID), which offers a three-day course in evaluating capacity development.
Consequently, the current body of knowledge on evaluating capacity development consists mainly of guidelines and online manuals – the validity and usefulness of which is unknown. Very few ‘exemplary evaluations’ of capacity development are available, and evaluators have very little access to the implicit knowledge possessed by experienced evaluators, which can best be accessed through direct interaction in professional development workshops and in-service training.
Priorities for improvement
The current state of practice in the evaluation of capacity development, and the challenges facing evaluators, suggest five priorities for improving practice in this area:
- Expanding professional development
- Applying concepts and tools from systems thinking and complexity
- Conducting different types of evaluation for different user groups and needs
- Enhancing knowledge sharing among evaluators
- Shifting the emphasis of evaluation from accountability to learning and programme improvement
1. Expanding professional development
The evaluation profession has grown dramatically in recent years. There are now many opportunities for evaluators to continue to develop their knowledge and skills through professional workshops. These are held at conferences of the African Evaluation Association, the American Evaluation Association, the European Evaluation Society and other professional organisations. There are also a number of more intensive training opportunities for development evaluators. However, to date there have been very few opportunities for professional development specifically related to evaluating capacity development. Given the large and growing number of evaluations that are now expected to address issues of capacity development, it is important to expand opportunities for professional development in this area.
Professional development is needed by those who conduct evaluations of capacity-development interventions, and also by those who commission and supervise such evaluations. It is not uncommon to encounter personnel in development agencies whose job it is to manage evaluations, but who have little or no training or practical experience in carrying out evaluations. This is one reason for the poor quality of evaluation design.
2. Applying concepts and tools from systems thinking and complexity
Systems thinking has a great deal to offer to the design, management and evaluation of capacity-development interventions. Bob Williams’s article, Thinking Systematically, which featured in Issue 37 of Capacity.org, is a good starting point for individuals who are unfamiliar with this field. Williams and others have made this one of the most dynamic and productive sub-fields within professional evaluation, and the opportunities for professional development and knowledge sharing are expanding rapidly. Michael Patton’s book, Developmental Evaluation, is a valuable new resource for applying complexity concepts to enhance innovation and evaluation use.
3. Conducting different types of evaluation for different user groups and needs
Perhaps the most concrete and practical way to improve practice in this area is to recognise the need to conduct different types of evaluation for the various types of user groups and purposes. There has been a tendency to confuse the objects of an evaluation (endogenous versus external interventions) and the purposes (formative versus summative). These confusions have often led to misunderstanding and conflict during evaluation processes and to the production of evaluation reports that failed to live up to expectations or satisfy the information needs of the intended users.
Evaluators need to work with evaluation clients and other stakeholders to focus their evaluations on one question at a time. Then they need to select appropriate methods, information and analytical procedures for responding to each question in a way that the intended users will find convincing and useful. The strategy should be to focus the evaluation on a few key questions of interest to intended users, and to work with them throughout the process to ensure the results are used to inform decision making.
It is easy to see why learning- and improvement-oriented evaluations need to be designed and conducted differently from accountability-oriented ones. To be useful, a learning-oriented evaluation needs to engage programme managers, staff and beneficiaries in a participatory exercise – always keeping in mind that the evaluation process is often more important than the report produced.
In contrast, an accountability-oriented evaluation – which seeks to assure external stakeholders that resources have been well used and that the programme has generated significant results – is best carried out by external evaluators (often measurement specialists) who operate at arm’s length from programme personnel and the intended beneficiaries.
It may be less easy to see why an evaluation of a local capacity-development effort needs to be designed differently from an evaluation of an external support programme. However, because local organisations often work with many external aid programmes, evaluations of local capacity-development efforts generally need to examine complex networks of inter-organisational relations and their contributions (positive and negative) to local capacity development.
A key capacity for local organisations is the ability to manage external sources of support effectively. In contrast, an evaluation of an external support programme needs to focus on the objectives, operations, and results of the programme, which may impact on one or more local capacity-development processes. Even in those rare cases where there is a strong one-to-one relationship between an external programme and a local organisation, it has proved useful, and sometimes even essential, to separate evaluations of the support programme from those of the local capacity-development effort.
Over the years, there have been significant advances in the methods available for measuring programme costs and benefits, and these should be employed in summative evaluations of capacity-development processes and interventions. The International Initiative for Impact Evaluation and the Journal of Development Effectiveness are excellent places to obtain information on such methods. One important thing to note is that experimental methods for measuring impact need to be built into the intervention from the beginning rather than being tacked on at the end. In other words, the intervention itself becomes part of the evaluation, rather than the evaluation becoming part of the intervention.
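To illustrate why impact measurement must be designed in from the outset, the minimal sketch below (a hypothetical illustration with invented numbers, not drawn from any actual evaluation) computes a simple difference-in-differences impact estimate. It only works because baseline capacity scores were collected for both the supported organisations and a comparison group before the intervention began.

```python
# Illustrative sketch only: a minimal difference-in-differences impact estimate.
# The scores are hypothetical. The point is that baseline measurements for both
# the participating and the comparison organisations must exist before the
# intervention starts, which is why impact measurement has to be built into the
# intervention from the beginning rather than tacked on at the end.

def mean(values):
    return sum(values) / len(values)

# Hypothetical capacity scores (e.g. from an organisational assessment tool),
# measured at baseline and at follow-up for each group.
treatment_baseline = [42, 38, 45, 40]   # organisations receiving support
treatment_followup = [55, 50, 58, 54]
comparison_baseline = [41, 39, 44, 43]  # similar organisations without support
comparison_followup = [46, 44, 47, 48]

# Change observed within each group between baseline and follow-up.
treatment_change = mean(treatment_followup) - mean(treatment_baseline)
comparison_change = mean(comparison_followup) - mean(comparison_baseline)

# Difference-in-differences: the change attributable to the intervention,
# over and above the change that occurred anyway in the comparison group.
impact_estimate = treatment_change - comparison_change

print(f"Treatment group change:  {treatment_change:.1f}")
print(f"Comparison group change: {comparison_change:.1f}")
print(f"Estimated impact:        {impact_estimate:.1f}")
```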
There has also been a great deal of progress in using qualitative and participatory methods in learning- and improvement-oriented evaluations. To cite just a few examples:
- The most significant change method has been widely used to encourage stakeholders to reflect on the results of their work, through the development and review of narratives describing the ‘most significant changes’ that have resulted from an intervention.
- Appreciative inquiry is being used in evaluations to help stakeholders to distinguish aspects of interventions that are working from those that are not, and to help them to build on strengths rather than focus on weaknesses.
- Horizontal evaluation is a type of participatory evaluation that promotes collective learning and knowledge sharing in the context of a network; it is especially useful for programmes implemented by teams at multiple sites.
4. Enhancing knowledge sharing among evaluators
Knowledge sharing offers an excellent opportunity for improving the evaluation of capacity development. Many evaluators have participated in evaluations of capacity development, but they are hesitant, or lack opportunities, to share their experiences. One reason for their reluctance might be that few evaluators feel proud of their efforts to evaluate capacity development and many feel that their work has been mediocre or their experiences have been negative.
Internal evaluators may face institutional pressures not to wash dirty linen in public, and not to discuss problematic evaluations with colleagues in other organisations. Similarly, external evaluators may feel that openly discussing mediocre or problematic evaluations may hamper future job prospects.
Whatever the reason, it would be useful to create ‘safe spaces’ in which evaluators could share their knowledge and experiences of evaluating capacity development, without fear that sharing this information could have negative repercussions for them or for their organisations. Ideally, such knowledge sharing would take place in professionally facilitated face-to-face workshops, which could be organised within development agencies or for groups of organisations that are committed to learning from experience.
5. Shifting the emphasis of evaluation from accountability to learning and programme improvement
In recent years, ‘impact mania’ and the ‘evidence-based everything’ movements in public management have shifted the emphasis of evaluation from learning and programme improvement to accountability.
The bureaucratisation of evaluation is present throughout the international development community, and is a general problem rather than one specific to the evaluation of capacity development. Advancing the use of evaluation to improve the effectiveness of capacity-development initiatives requires swinging the pendulum back, away from accountability for accountability’s sake and towards learning and programme improvement.
References
Acosta, A. and Douthwaite, B. (2005) Appreciative Inquiry: An approach for learning and change based on our own best practices. ILAC Brief No. 6. Rome, Institutional Learning and Change Initiative.
Baser, H. and Morgan, P. (2008) Capacity, Change and Performance. ECDPM Discussion Paper No. 59B. Maastricht, the Netherlands, European Centre for Development Policy Management. (www.ecdpm.org/dp59B)
Davies, R. and Dart, J. (2005) The ‘Most Significant Change’ (MSC) Technique: A guide to its use. (www.mande.co.uk/docs/MSCGuide.pdf)
Engel, P., Keijzer, N. and Land, T. (2007) A Balanced Approach to Monitoring and Evaluating Capacity and Performance: A proposal for a framework. ECDPM Discussion Paper No. 58E. Maastricht, the Netherlands, European Centre for Development Policy Management. (www.ecdpm.org/dp58E)
Horton, D., Alexaki, A., Bennett-Lartey, S., Noële Brice, K., Campilan, D., Carden, F., de Souza Silva, J., Thanh Duong, L., Khadar, I., Maestrey Boza, A., Kayes Muniruzzaman, I., Perez, J., Somarriba Chang, M., Vernooy, R. and Watts, J. (2003) Evaluating Capacity Development: Experiences from Research and Development Organizations around the World. ISNAR, IDRC and CTA. (www.idrc.ca/en/ev-31556-201-1-DO_TOPIC.html) (Also available in French and Spanish.)
Leeuw, F. (2009) Evaluation: A booming business but is it adding value? Evaluation Journal of Australasia, 9(1), 3–9.
Marguerite Casey Foundation (...) Organizational Capacity Assessment Tool. (www.caseygrants.org/pages/resources/resources_downloadassessment.asp)
Patton, M.Q. (2011) Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York, The Guilford Press.
Patton, M.Q. and Horton, D. (2009) Utilization-focused Evaluation: An introduction. ILAC Brief No. 22. Rome, Institutional Learning and Change Initiative.
Rogers, P. (2008) Using programme theory to evaluate complicated and complex aspects of interventions. Evaluation, 14(1).
Simister, N. with Smith, R. (2010) Monitoring and Evaluating Capacity Building: Is it Really that Difficult? Praxis Paper No. 23. Oxford, International NGO Training and Research Centre (INTRAC).
Thiele, G., Devaux, A., Velasco, C. and Manrique, K. (2006) Horizontal Evaluation: Stimulating social learning among peers. ILAC Brief No. 13. Rome, Institutional Learning and Change Initiative.
About the author
Doug Horton is an independent researcher and evaluator who works on topics related to innovation and capacity development in the context of international development. He earned a PhD in economics from Cornell University (1977) and an MSc degree in agricultural economics from the University of Illinois (1967).
Doug was head of the International Potato Center’s Social Science Department between 1975 and 1990, and from 1990 to 2004, he was a senior officer at the International Service for National Agricultural Research. He has participated in more than 50 evaluations and has published more than 100 articles, books, reviews and research reports in his fields of professional interest.