Sunday, 10 February 2013

Measuring time with a yardstick?

Much of my work on monitoring and evaluation is with organisations that work on human rights and governance. It is harder for them to show they make a difference than for people who build roads or wells. Building a road is complicated, but the builders have reasonable control over the process and at the end they can say, "we have built a 50-kilometre road". Human rights and governance work is more indirect: it is not the intervention of human rights organisation "X" that frees a prisoner - it is a prison guard, the last piece in a huge puzzle of actors and actions. It is not campaign "Y" that ends domestic violence in a woman's life - it is the woman herself, when she leaves an abusive relationship, or the abusive partner, when he stops battering, for reasons that are far beyond the reach of campaign "Y". And you can't blame, say, Amnesty International if the US government fails to close down the Guantánamo detention camp.

The highly commendable DFID Working Paper 38, Broadening the Range of Designs for Impact Evaluations (2012), uses the term "contributory causes" for interventions that need many "helping factors" to succeed. Such causes cannot be adequately represented by log-frame type "input > output > impact" cascades, which rest on the assumption that a single strand of activities controlled by a small number of actors produces the desired change. Plenty of people have written about such complexity issues (try, for example, Ben Ramalingam's blog Aid on the Edge of Chaos). There is a growing consensus that it takes rich sets of methods and tools to make informed judgements as to how change happens in such complex set-ups. Incidentally, DFID Working Paper 38 points to a few interesting avenues in that direction.

But for some reason many evaluators still seem to believe that impact is simply achieved by setting goals "in terms of improving a specific number of people's lives, and then [basing the organisation's] decisions in project planning around those numerical targets". (This is a verbatim quote from a report by a reputed international consultancy firm, in its evaluation of a world-wide movement of rights activists.) I find that worrying. One cannot measure the impact of complex interventions by counting people, just as one cannot measure time with a yardstick. Forcing activists and their movements (often capable of generating ideas that are way beyond the average evaluator's imagination) into a single, linear pattern of thinking and working can be destructive. I doubt we will improve this world by admonishing social innovators to stop doing things that cannot be counted. Better to start by trying to understand what activists are doing and why they are doing it - for example, through participatory "sense-making" exercises or other forms of qualitative research - and then define together (i) whether it is really necessary to measure "impact" (rather than the more immediate changes the intervention may bring about), and (ii) if "impact" must be measured, how best to capture it over the years in a way that makes sense for the specific intervention. Everything else risks being extremely expensive (multi-year academic research) or tokenistic (i.e. an unjustifiable waste of resources).

1 comment:

Sirajul Islam said...

Good post! It reflects your experience and the problems you've encountered while performing evaluations. As you know, it is widely accepted that a key requirement for robust evaluation of both implementation and outcomes is that evaluators should be intellectually and practically independent of those who deliver the programme. There is also evidence from the international literature that various forms of self-evaluation, e.g. 'action research', can be helpful in promoting learning and reflective practice at the front line. However, local involvement and participatory research is not a substitute for independent scientific evaluation, and effective programmes develop an appropriate combination of internal and external processes - with the latter being an ethical imperative when significant public or donor money is involved and large numbers of people are exposed to the untested effects of the programme.

Experience suggests that both implementers and communities can and should be productively involved in all types of evaluation, to ensure that there is local 'buy-in' and that external researchers do not overlook key issues that may affect the results or their interpretation. An important message is that monitoring, evaluation and feedback processes are of particular value when they contribute to learning and development in programmes. Successful programmes develop mechanisms and tools (including standards and benchmarks) for 'quality control' of ground work, to ensure that the work stays close to the agreed objectives of the programme or service and conforms to principles of effective delivery (in so far as these are clear). Documentation or 'manualisation' of what, precisely, the programme and its constituent services or activities consist of is likely to be a key principle of effective practice: without it, monitoring and evaluation cannot take place, and replication of successful approaches is thus prevented.