Tuesday, 12 August 2014

3ie comment on video tutorials

3ie, the International Initiative for Impact Evaluation, has sent a comment explaining why you need to register if you wish to take the useful quizzes that come with their video tutorials on impact evaluation. For some technical reason, the comment cannot be displayed under my original post (below), so I am taking the liberty of posting it right here:

>>Thank you for reviewing the Asian Development Bank-3ie video lecture on quasi-experimental methods. We appreciate your feedback. We would be happy to have you review the other videos in our series.

Your point on the log-in for the quiz is well taken. We are requesting basic information right now to help the Asian Development Bank screen staff participants for the Making Impact Evaluation Matter conference in early September. From a technical perspective, such a feature also helps us keep spammers at bay. We may, however, decide to disable this at a later date.<<



Thursday, 7 August 2014

Video tutorials on impact evaluation

The Asian Development Bank (ADB) and the International Initiative for Impact Evaluation (3ie) have published a set of six video lectures on impact evaluation. They are available here. The lectures are presented by different specialists; the slides accompanying the lectures can be downloaded from the same website.

I have taken a look at Dr. Jyotsna Puri's lecture on quasi-experimental methods. Dr. Puri illustrates the use of such methods with a real example she has worked on. The lecture is quite clear, but be prepared for technical jargon and high information density. Also, a basic understanding of mathematics helps - you need to know where to find the x-axis and the y-axis on a graph. This is not for absolute beginners, but arguably, impact evaluation, in particular the kind of rigorous evaluation 3ie promotes, is not for beginners (nor for the financially meek, by the way).

A nice feature is the quiz that one can take after each lecture. What I like less is that you need to log in to take the quiz - what for, I wonder? There are no prizes to win or diplomas to obtain. Only the user's privacy to lose. In my view, that is an unnecessary hurdle which might put off some prospective learners.

Monday, 4 August 2014

Ending violence against women - what works?

In case you have not come across this yet: the UK Department for International Development (DFID) has published a whole series of "How To Notes" and "Evidence Digests" to guide work on violence against women and girls. There is a dedicated web page (click on the link to get there) where you can download the guides. 

The web page includes a link to Violence against Women and Girls Newsletters, which are published at quarterly intervals. The newsletters are rich in information on a wide range of interventions and tools.

Quality and quantity

In this holiday season I visited my sister, who is passionate about gardening. I brought her a beautifully illustrated book about pear orchards in Prussia. Prussia, a belligerent kingdom that ceased to exist in 1918, was known mainly for its military dominance in the region and for an obsession with order and discipline. So I was hardly surprised to find, in that book, a table showing drawings of differently shaped pears, arranged in neat rows and columns. The roundest pears were displayed near the top left corner, the thinnest, longest ones near the bottom right, with dozens of intermediate shapes in between. Every pear came with a drawing of its appearance, as well as a cross-section criss-crossed by lines and dots dividing it into neat circles and measurements.

"Look," I exclaimed, "the Prussians developed a system to classify pears!" My sister took a quick glance and responded, somewhat bitterly, "According to size, of course." Arguably, size was the only aspect of pears that could be reliably and reproducibly measured in those days. But what does its size say about a pear? It doesn't tell me whether the fruit is ripe or green, hard or soft, sweet or bitter, juicy or dry.

The Prussians were great simplifiers. They were (in)famous for organising forestry in a way that measured a forest's value only in terms of its marketable yield. Basically, a forest was an amount of timber, nothing else. In "Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed" (1998), James C. Scott explains how that reductionist approach produced the sad monocultures of same-age trees in rank and file which have disfigured German landscapes ever since. The model was adopted throughout the world, a disaster for peasants and others who had lived off the many fruits and animals of rich, diverse forests. Rather than learning to produce maps and measurements faithful to the phenomena they found in nature, the Prussians and their followers aligned reality to their systems of measurement, cutting down forests to turn them into easily manageable and measurable plantations.

You have guessed why these musings appear on a blog on development and evaluation. Counting things can be useful, and often it is necessary. If you sell timber, you want to know how many cubic metres you can get out of those trees. But if you're into human development, you need to know when quantity ceases to be the only important dimension, and where you have to look at quality to understand a phenomenon. If you do your qualitative homework well, you might even find ways to accurately describe the characteristics you are looking for, and turn those aspects of quality into scales that allow you to put a number to them - as the little sketch below illustrates.
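To make that concrete, here is a minimal sketch in Python of what such a descriptive scale might look like. The pear attributes, the verbal levels and the scores are entirely made up for illustration - the point is the technique, not the numbers:

```python
# A minimal sketch of turning qualitative descriptions into ordinal scales.
# All attributes, levels and scores below are hypothetical illustrations.

# Each characteristic gets a small ordinal scale anchored in plain words,
# so that two observers can agree on what a given score means.
RIPENESS = {"green": 1, "firm-ripe": 2, "ripe": 3, "overripe": 4}
SWEETNESS = {"bitter": 1, "bland": 2, "sweet": 3}
JUICINESS = {"dry": 1, "moist": 2, "juicy": 3}

def score_pear(ripeness: str, sweetness: str, juiciness: str) -> dict:
    """Map verbal judgements onto numbers, keeping the labels traceable."""
    return {
        "ripeness": RIPENESS[ripeness],
        "sweetness": SWEETNESS[sweetness],
        "juiciness": JUICINESS[juiciness],
    }

print(score_pear("ripe", "sweet", "moist"))
# -> {'ripeness': 3, 'sweetness': 3, 'juiciness': 2}
```

The numbers are only as good as the verbal descriptions behind them: the scale adds nothing unless different raters read "juicy" the same way before anyone starts adding scores up.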

But it is a risky idea to try and bend reality until it can be counted. (Please, no chopping down of forests just to count the trees.) 

It is a good idea to think about monitoring and evaluation as soon as you start designing a programme. But let's resist the temptation to build only programmes that are geared to produce the kind of change that can easily be measured. Following that temptation would generate a very barren development landscape.

Tuesday, 15 July 2014

We're still alive!

This is the first time I have not posted anything on my blog for two months in a row. It has been a very, very busy time. My associate Wolf Stuppert and I have completed our review of evaluation approaches and methods for interventions related to violence against women and girls. If you want to find out more about it, please visit the dedicated blog www.evawreview.de, where you can also download the full review report, as well as the scoping and inception reports.
Watch this space for more posts in the very near future. Thank you for your patience!

Wednesday, 14 May 2014

Another nice resource on bias

The Center for Evaluation Innovation has published a 15-page booklet called How Shortcuts Cut Us Short: Cognitive Traps in Philanthropic Decision Making. Click on the title to get to the site where you can download the publication. It explains what "confirmation bias", "escalation of commitment", "availability bias", "bounded awareness" and "groupthink" are about, and how they risk clouding the judgment of grant makers and others working on well-meaning ventures. Anyone who has been involved in grant making will recognise familiar situations.

Hindsight bias, Mr. Wray would say. He has recorded the catchy tune embedded above, which summarises the different types of bias that can obstruct intelligent decision-making. You'll find a few extra ones that you may have come across at work, too.

Monday, 14 April 2014

Resources for facilitators

A friend has asked me about resources on facilitating planning workshops. This makes me realise that my approach to facilitation is fed by many different streams. Some favourite resources in English:
  • One is the "Technology of Participation", developed over a period of about half a century by the Institute of Cultural Affairs. Apparently a recent ToP handbook has been published (late 2013 or early 2014).
  • Another favourite source: "Time to Think" by Nancy Kline. Maybe a bit wordy (pleasant for people who enjoy reading stories about the author's many clients), but definitely worthwhile.
  • On effective low-tech visualisation, "The Back of the Napkin" by Dan Roam is full of lovely ideas.
  • Also nice: "Gamestorming" by Dave Gray, Sunni Brown and James Macanufo, and "Visual Meetings" by David Sibbet.
  • If you're into systems and complexity, "Systems Concepts in Action" by Bob Williams and Richard Hummelbrunner offers inspiration.
  • And then there is that fat compendium of facilitation methods called "The Change Handbook", compiled by Peggy Holman, Tom Devane and Steven Cady. The descriptions are rather short, but if you are not looking for step-by-step guidance, it is quite adequate.
I recommend you start with the first three items on the list. More time-tested facilitation tips are on the page "Planning, Strategies, Tools" on this blog. I particularly recommend the posting "Tips for Multi-Everything Facilitation". (Apologies for the messy layout of that page - this blog is a personal, unpaid spare-time initiative, and I find too little time to make it all look neat and smart.)

Friday, 4 April 2014

Review of Evaluations - Inception Report ready

It has been quiet again on this blog - this is because we have been busy producing the Inception Report and pursuing our research on approaches and methods in evaluations of interventions on violence against women and girls. (Apologies, I still haven't found a shorter way of saying this!)

You can download our full inception report and find the link to an interesting discussion of our work by Rick Davies on our dedicated review blog www.evawreview.de.

Friday, 7 March 2014

Gentle evaluations for huge projects?

Some weeks ago we – Wolf Stuppert (my associate and co-author of this posting) and I – noticed a call for evaluation proposals that caught our eye. It was about a USD 50 million initiative in a field where both of us have substantive experience.

At first sight, the terms of reference (TOR) looked exciting: an ambitious, nationwide programme that would have to be assessed for its replicability in other contexts. But our level of excitement dropped dramatically when we studied the TOR in more detail. Although all DAC criteria were listed – relevance, effectiveness, efficiency, impact, sustainability – the key questions under those headings seemed surprisingly modest. They focused chiefly on programme process and on results among direct programme stakeholders, i.e. the non-governmental organisations that had received grants and free training under the programme. That is, the evaluators would ask those who had obtained those goodies whether they felt the programme was effective.

The renowned international accounting firm that has run the initiative (and drafted the TOR?) should know that a certain amount of bias might cloud the judgement of people who have drawn such immediate benefits from the programme.

No mention whatsoever, in the TOR, of the ultimate beneficiaries – the citizens of that country who are expected to enjoy more responsive and accountable governance as a result of the programme. Although the programme has been running for several years, the only evaluation question about impact is fairly abstract, inviting the evaluators to speculate about the extent to which the outcomes achieved might contribute to longer-term changes.

If the evaluation is supposed to test the replicability of the initiative, it would seem important to scrutinise the theory of change underlying the programme, the way it has been translated into action and the changes it has contributed to. What the TOR calls for falls short of that – by a long way. 

For instance, a special TOR section on potential risks and limitations explicitly rules out quantitative data collection and counterfactuals. Instead, the prospective evaluators are invited to rely primarily on their own judgment, “backed by qualitative evidence” that would be drawn from statements and reports produced by the organisations running the programme. It is unclear what specific risks such restraint is supposed to address – not the risk of discovering the programme has passed unnoticed in the society it is supposed to strengthen, we hope?

We love qualitative research and we feel it is important to gather stakeholders’ views on the programmes they are implementing. And we do not say that every development project needs an impact assessment. In many cases, an exercise that focuses on process and immediate outcomes can be perfectly sufficient. (For instance, many smaller initiatives are so grossly understaffed or underfunded that one can’t expect them to produce any significant results anyway – in such a situation, a combination of in-depth conversations and an experienced evaluator’s own judgment may help to draw attention to necessary adjustments.) 

But if you want to gauge the replicability of a multi-year, multi-million-dollar initiative, then you'd better do it with the thoroughness and transparency it takes to produce robust findings. The firm that runs the programme convinces clients around the world to invest massive amounts of money in accountability. It should know how to run the kind of evaluation needed to find out whether a programme works. Are transparency and external scrutiny less important when it comes to one's "own" programmes?

Who owns those programmes, anyway? But that opens a different discussion...

Tuesday, 18 February 2014

Tell a story: Evaluations that Make a Difference

The research project "Evaluations that Make a Difference" is looking for stories about evaluations. The idea is to explain what has made evaluations influential or successful in a way that is more emotionally engaging than scientific publications. A lovely idea! Find more information by clicking on the following link: Call for Stories | Evaluations that Make a Difference