On the Oxfam blog ‘From Poverty to Power’, guest author Graham Teskey (Governance lead at Abt Associates) discussed ‘What makes Adaptive Management actually work in practice?’. Within an insightful discussion of ‘working flexibly’, Graham’s point that the bundling of Monitoring, Evaluation and Learning as MEL isn’t helpful rings true for me. MEL is becoming a preferred approach to understanding the performance of development activities. To my mind, learning is the newcomer to the MEL model, added to give impetus to the existing objective of making use of conclusive findings after the M&E process, and to the new objective of making use of provisional findings during the M&E process. This reflects a level of dissatisfaction with the impact of M&E investments on current and future planning and strategy.

So, on the surface this bundling sounds like a good thing: we have robust methods for monitoring and evaluation, and by adding learning we create a more useful performance assessment approach. But is there a danger that by adding the L to M&E we have somehow missed the opportunity to reflect more critically on the challenges of traditional M&E approaches? This is especially the case when, at the same time, evaluation itself is becoming increasingly complicated, alien, and costly, for example through drives for ever greater analytical rigour and external validity via the application of Theory of Change and Randomised Controlled Trial approaches.

It would seem that learning is being adopted as a cure-all for the uptake shortcomings of M&E, whatever the methods used. A more critical perspective would ask to what extent the practice and culture of mainstream M&E support or hinder individual, group, and social learning.

Perhaps the opportunity at hand is, as Graham suggests, to separate evaluation from monitoring and see how a learning approach to monitoring could better support adaptive management during implementation.

Monitoring, let’s face it, is not very fashionable. Isn’t it about accounting for inputs and outputs using simple and largely quantitative measures (costs, units, throughput, etc.)? Compared to evaluation, there are few career paths or awards for monitoring professionals. This may be the case, but when we look at what the implementers of development activities actually do, and care most about, within the scope of MEL, it is monitoring. That’s because monitoring is most often driven by internal and positive incentives (what is going on right now that I need to know?) and used most rapidly in management and real-time decision making. Evaluation, by contrast, is often seen as being driven by external or at least independent actors, for accountability to directors and funders, with low internal incentives and sluggish use in management.

So, what if monitoring could get a much-needed boost of interest and innovation, and be seen as a good thing in itself for adaptive management, planning, and strategic thinking, rather than primarily, from the evaluation perspective, as a source of data for more sophisticated methods of analysis? Moreover, might monitoring practice re-invigorated by organisational learning methods be more appropriate for the kinds of development programming that are becoming mainstream (innovation, challenge funds, global funds, etc.)?

To put it another way, design and management approaches like Problem Driven Iterative Adaptation, Prototyping, Thinking and Working Politically, Fail Fast, Agile, and the rest are not so much pointing to the need for more learning-orientated evaluation practice as challenging the whole idea that development activities should be conceptualised or (as is often the case) designed as objects that are evaluable. There is, I think, too much of the evaluation tail wagging the design dog in development, with the content and process of activities being shaped by what can be rigorously and validly analysed.

Here are seven features we might look for in a re-invigorated approach to monitoring:

  1. Check for emergent inputs and outputs (outside of the design scope)
  2. Bias evidence capture toward a focus on outliers and over-performers
  3. Look for patterns and holes
  4. Analyse evidence streams in real time
  5. Talk about findings continuously
  6. Iterate the design as events emerge
  7. Work out loud to encourage others to amplify understanding