Embracing innovative practice

29 October 2010

Many years of experience in the field had led David Watson to question the value of monitoring and evaluation. Recently, a range of innovative approaches to M&E has given him new hope. Here he explains why.

Regrettably, most monitoring and evaluation (M&E), especially of capacity development, has been expensive, inconsequential and futile. At least, that is what my practical experience in planning, designing, managing, monitoring and evaluating development policies and activities had led me to conclude. It rarely seemed to result in effective management responses in the form of changed practices. It was even more rarely accompanied by collective reflection and learning among stakeholders. M&E seemed to be ‘something the donor needed to do’, since donors understandably have to justify development aid expenditures. Too often, the corollary was that national counterparts had limited or no involvement (or interest) in such monitoring and evaluation activities. Arguably, this pattern was inevitable given the resource imbalances and consequent unequal power relationships between donors and recipients in the development ‘business’.

On a more positive note, however, my recent odyssey through the literature, and some remarkable descriptions of innovative, capacity-enhancing M&E practice, have left me feeling much more encouraged. The problems I observed have been widely recognised, and at least some donor agencies and practitioners are addressing them. The articles in this issue of Capacity.org present insights into some of these innovative practices, which will hopefully encourage readers to explore further those approaches to monitoring that themselves stimulate capacity development.

Defining capacity

One of the major challenges in any discussion of capacity-related monitoring is that ‘capacity’ is a poorly understood concept. It is not yet a well defined area of development practice among the various professions involved in development agencies – donors, multilateral development banks and NGOs. Nor is there a generally accepted definition of ‘capacity’ in the literature.

A recent study by the European Centre for Development Policy Management (ECDPM) defined capacity as ‘that emergent combination of attributes, assets, capabilities, and relationships that enables a human system to perform, survive and self-renew’ (see box). Based on 18 case studies of organisations and networks around the world, the study concluded that there are multiple dimensions of ‘capacity’. The clear implication is that we need to recognise and acknowledge all of these dimensions in capacity building efforts, and to cater for them in approaches to the M&E of capacity.

Dimensions of capacity

Until now, the enhancement of an organisation’s capacity to perform (to deliver development results) has been seen as the main – often the only – purpose of capacity building efforts. Therefore, results-based management (RBM) approaches, including the ‘log frame’, are often still used to assess the need for, and to design in detail, capacity building projects. These approaches posit a ‘linear’ connection between the various aspects of capacity building initiatives: from the provision of inputs (technical assistance and equipment, for example) to the delivery of outputs (e.g. more able, competent individuals or service units), which, based on certain assumptions, lead to the achievement of purposes (improved service delivery) and ultimately goals (improved health in a population).


However, the ECDPM study’s scrutiny of the case studies and the literature revealed other capacities that are also clearly crucial in making organisations, networks or sectors successful and sustainable. For example, a successful organisation must have the capability to act, to organise itself, and to influence others. Such an organisation relates productively to other players in the context in which it functions, and has some legitimacy in the eyes of those other players. The organisation’s capability to adapt and self-renew is essential, and is bound up in the ability to master change within itself or among other players, and to adopt new ideas. The case studies also revealed another capability of successful organisations – their ability to achieve a degree of coherence, including the definition and maintenance of core values governing how the organisation operates.

Systems thinking

These abilities, qualities and capacities of successful organisations, networks or systems resonate closely with a body of management literature which, I must confess, I was not familiar with until recently: systems thinking, and complexity theory in particular. These are essentially perspectives rather than all-embracing theories, and are concerned with trying to understand the behaviour of organisations (and individuals) in complex, interactive, ‘messy’ multi-organisational settings, such as the health sector of a country [1]. According to these perspectives, capacity development is a process that is not linear (in contrast with the RBM perspective), but instead tends to be associated with multiple causes, solutions and effects, some of them unintended or essentially unpredictable. The attribution of particular outcomes to particular capacity enhancing measures thus becomes difficult, if not impossible.

In many developing countries, the results of public sector capacity building measures have been unsatisfactory, despite the intensive design efforts and the large volumes of resources devoted to such efforts over several decades. There are clearly difficult institutional and political contextual factors at work in public sector environments [2]. But these challenges do not adequately explain the limited success achieved to date.

Recent studies have also indicated significant capacity constraints within development agencies in managing, motivating and resourcing their M&E function, particularly for capacity building [3]. Some observers have begun to explain this poor record with reference to the complex nature of ‘capacity’, and the apparent importance – revealed in the ECDPM case studies and much of the systems thinking literature – of interaction and interdependency among individuals, work groups and organisations, and of ‘feedback’. Margaret Wheatley’s contributions are well worth reading in this regard [4]. She believes that individual behaviour change can never be induced by measurement, but can only happen as a result of personal choices. Desirable behaviours, such as committed, quality work, teamwork and learning, are more likely to emerge when people feel a shared sense of what they hope to achieve together.

Capacity, change and performance

It is important to acknowledge, however, that formal RBM approaches to programme design and performance monitoring do have an important role to play, under well defined (but in the public sector, unfortunately rare) conditions. These include cases where an organisation ‘signs up’ voluntarily to accept capacity development support; where stakeholders themselves are able and willing to assess the capacities they need, and can indeed define them unambiguously (this is often easier said than done in the public sector); where there are incentives to improve performance; and where leadership and ‘ownership’ of the organisation are firm and clear. Several ECDPM case studies, such as those describing the Rwanda Revenue Authority and the Philippines–Canada Local Government Support Programme, offer positive examples of where these factors prevailed, and contributed to successful capacity building outcomes using an RBM framework.

Several other ECDPM case studies – of the COEP network (Brazil), the ENACT programme (Jamaica) and the regional organisation IUCN in Asia – provide encouraging insights with regard to monitoring. They illustrate how positive impacts on capacity were achieved where the organisations were encouraged to learn lessons from their own experiences, and evolved approaches to developing their own capacity accordingly, consistent with the ‘systems thinking’ school [5]. Those cases also note how the donors supported the organisations in ways that responded to the uncertainties they faced. The donors in these cases demonstrated flexibility. In one case, the donor abandoned an RBM framework in favour of a more process-oriented approach to monitoring progress and capacity development. In another, the donor hired an NGO not as a source of expert inputs, but simply for its ability to help a network of community development corporations learn from their experiences [6].

Innovative approaches

It is also heartening to read about some of the innovative approaches to the monitoring of capacity development that have been applied over the past few years, many of them by large international NGOs. These approaches include ActionAid’s Accountability, Learning and Planning System (ALPS), Rick Davies’ ‘most significant change’ (MSC) technique and IDRC’s Outcome Mapping [7].


These approaches have characteristics that are aligned with systems thinking. They involve structured interactions among stakeholders. They are not concerned exclusively with quantitative measurement, but also with building consensus on what constitute qualitative improvements that will contribute to the broad goals of the systems involved. They are based on day-to-day experiences and emerging themes, rather than on predetermined indicators of progress. They use ‘work stories’ as a means of ‘making sense’ of what is happening, and of what effects are emerging. They tend to demystify ‘M&E’ and allow even the most vulnerable stakeholders or beneficiaries to have a voice in periodic reflection, and thereby actually nurture capacities for critical analysis, debate and decision taking. They also appear to contribute enormously to learning, and thus directly to organisational (and individual) capacity development.

These approaches have also proven practical. The Pelican online discussion forum has recently featured syntheses of debates on recent advances in thinking on evaluation, learning and organisational capacity development, with contributions from Christian Aid and Care International [8]. ECDPM has recently compiled an inventory of approaches to M&E of capacity and capacity development [9].

It is difficult to escape the conclusion, however, that in providing capacity enhancing support, donors generally face a dilemma. On the one hand, they need to be able to demonstrate the results of their aid programmes to their political masters, and to their national audit agency ‘watchdogs’. They therefore tend to rely on RBM programme design and management tools. On the other hand, the evidence from recent studies and evaluations indicates that these RBM approaches (with some exceptions) do not support the broader definitions of capacity, or the interactions and learning that will contribute to capacity development in the longer term.

Accountability

Hence, accountability emerges as an important driver in both systems thinking and RBM approaches. It is possible to draw a distinction between ‘exogenous’ and ‘endogenous’ accountability. Exogenous accountability describes the accountability that ‘recipient’ governments and organisations owe to donor governments or lending agencies, which are in turn driven by their own audit and political accountabilities. Endogenous accountability refers to a system or organisation that is accountable to its own clients, local politicians, members, or users of its services.

The evidence appears to indicate that assistance modalities and the innovative informal monitoring mechanisms that are based on systems thinking tend to be more supportive of endogenous accountability mechanisms. In turn, these mechanisms are more effective in encouraging better performance and ‘ownership’ than the formal, control-oriented RBM monitoring mechanisms that are applied by donors to serve their own ‘exogenous’ accountability, ultimately to the audit bodies in developed countries.

Building capacities to deliver

Thus there are promising indications that approaches to monitoring that encourage stakeholder participation, interaction, self-assessment, critical reflection and, ultimately, collective learning, tend to build capacities to deliver. They also enable organisations to reorganise themselves, to innovate, to adapt, and to relate better to other players, and they encourage organisations to attain greater coherence and to bring values to bear in the manner in which they ‘do business’. In other words, these approaches help to build the more ‘rounded’ dimensions of capacity that were displayed by the successful organisations featured in the ECDPM case studies.

There is also evidence that while donors face a critical accountability dilemma in their capacity building programmes, they are progressively becoming more amenable to supporting endogenous accountability and monitoring mechanisms through the innovative approaches to M&E portrayed in this issue of Capacity.org.

Notes

David Watson is the author of several of the cases in the ECDPM study of capacity, change and performance. His theme paper, M&E of Capacity and Capacity Development (Discussion Paper 58B), reviews the literature, and examines innovations in M&E of public sector and NGO capacity and capacity building from both RBM and systems thinking perspectives.

[1] Jake Chapman has provided one of the clearest depictions of systems thinking concepts and their relevance to a complex, highly interdependent system such as the UK National Health Service. He describes the problems that arose when attempting to impose detailed performance targets and monitoring systems. See J. Chapman (2002) Systems Failure: Why governments must learn to think differently, Demos. For a summary of systems thinking concepts and how they relate to capacity and capacity development, see Peter Morgan (2006) The Concept of Capacity (draft), ECDPM.

[2] See World Bank (2005) Capacity Building in Africa: An OED Evaluation of World Bank Support.

[3] See the reports for the World Bank (2005) 2004 Annual Report on Operations Evaluation, and for Sida, Elinor Ostrom et al. (2002) Aid, Incentives and Sustainability (Summary report), Sida Studies in Evaluation 02/01:1.

[4] Margaret Wheatley and M. Kellner-Rogers (1999) ‘What do we measure and why?’ Journal for Strategic Resource Management.

[5] See the ECDPM case studies of the COEP Network (Brazil), the ENACT Programme (Jamaica), and IUCN in Asia.

[6] Jean Horstman reflects on organisational change in Inclusive Aid: Changing Power and Relationships in International Development, L. Groves and R. Hinton, eds, Earthscan 2004, p.51.

[7] For a summary of these approaches, and further references, see David Watson (2006) M&E of Capacity and Capacity Development, Appendix 2. ECDPM Discussion Paper 58B.

[8] Pelican discussion forum.

[9] See ECDPM (2006) Mapping of Approaches towards M&E of Capacity and Capacity Development (draft).

References

T.A. Abma (2003) Learning by telling: storytelling workshops as an organizational learning intervention, Management Learning, 34(2): 221.

ActionAid International (2006) Alps: Accountability, Learning and Planning System.

C. Bennett et al. (2004) The PISA Action Guide: Community-Driven Tools for Data Collection and Decision Making, PACT.

J. Chapman (2002) Systems Failure: Why Governments Must Learn to Think Differently. Demos.

ECDPM (2006) Initial Mapping of Approaches towards M&E of Capacity and Capacity Development (draft).

R. Davies and J. Dart (2005) The ‘Most Significant Change’ (MSC) Technique: A Guide to Its Use, MandE.

A. Hauge (2002) Accountability: to what end? UNDP Development Policy Journal vol.2.

J. Horstman (2004), in L. Groves and R. Hinton, eds, Inclusive Aid: Changing Power and Relationships in International Development, Earthscan, p.51.

P. Morgan (2006) The Concept of Capacity (draft), ECDPM.

E. Ostrom et al. (2002) Aid, Incentives and Sustainability (summary) Sida Studies in Evaluation 02/01.

UNDP (2006) Capacity Assessment Practice Note.

UNDP (2002) Handbook on Monitoring and Evaluating for Results. UNDP Evaluation Office.

M. Wheatley and M. Kellner-Rogers (1999) ‘What do we measure and why?’ Journal for Strategic Resource Management.

World Bank (2003) World Development Report 2004: Making Services Work for Poor People. Washington, World Bank.

World Bank (2005) Capacity Building in Africa: An OED Evaluation of World Bank Support. Washington, World Bank.

World Bank (2005) 2004 Annual Report on Operations Evaluation. Washington, World Bank Independent Evaluation Group.

David Watson, DFID Accredited Governance Consultant and ECDPM Associate