Tuesday 30 June 2009

A plea for quality in M&E

Like the previous entry, this post is only remotely inspired by Philipp Mayring's handbook on qualitative content analysis (in German, 10th edition, 2008). Mayring is a professor of psychology. Now don't run away! Psychology can teach us a lot about assessing fuzzy development processes.

As shown in the post below, any scientific analysis rests on qualitative steps that determine what is important, how the "what" should be measured, and how the measurements should be interpreted. These steps are taken by researchers, i.e. by common mortals. There is no absolute truth (leaving aside religious beliefs); there are only theories. Even theories that come with figures are just theories, to be confirmed or refuted in subsequent rounds of research.

When researchers do not explain what assumptions and decisions underlie their data gathering and analysis, they are easily challenged. A recent post on Duncan Green's blog describes how Oxford economics professor Paul Collier, in his latest book Wars, Guns and Votes, mixes and matches statistics to produce amazing guesses. They are just guesses: to his credit, Collier admits that. But a sentence of the type "an annual expenditure of $100m on peacekeepers reduces the cumulative ten-year risk of reversion to conflict very substantially from about 38% to 17%" does suggest a direct cause-and-effect connection, while it is really just a wild guess, a bold simplification of extremely complex realities, a provocative entry point for a discussion.

"Hard" figures that cannot be verified by a close exam of the ways in which they have been collected, selected and correlated are just numbers; they don't prove anything. Unfortunately, monitoring and evaluation in development programmes, meant not just to provoke discussion, but to verify progress and draw learning for the people involved, is often influenced by "naive" number worship. Where "what can be counted" receives more attention than "what matters most and how can we best find out", we may end up with "placebo" indicators. "Placebo" because they may make us feel better and they may placate donors for a while, but they don't cure ignorance.
