Monday, 22 August 2016

Getting those indicators right

In the world I work in – social development – the terms "theory of change" and "indicator" have become essential. You can't just try doing something that might improve people's lives. You need to explain why and how the things you (plan to) do will contribute to the improvements you want to happen, i.e. you need to develop a theory of change. A common way to visualise a theory of change is the logical framework, which ideally shows how interconnected activities and their immediate products (“outputs”) are supposed to contribute to further-reaching changes, i.e. goals, objectives, outcomes, impact and so forth.

Your theory of change can be more or less abstract, more or less detailed and restrictive, more or less adaptable. But nowadays in international social development work, the standard is that you need to explain (i) what change/improvements in the situation you want to achieve and (ii) how you intend to make sure it happens. A whole vocabulary comes with theories of change and results-based management. People talk about the logic leading from "outputs" to "outcomes", or from "objectives" to "impact", or about more complicated matrices of cause-and-effect chains combining all those terms and more.

The term “indicator” has become so common that I have started to hear strange things like:
“We’ve got to reach our indicators.”
“They won’t obtain their indicators if they continue like that.”
“Their indicators are not sustainable.”

Eh?
An indicator is something that you use to measure a phenomenon. For example, if you want to assess a person’s weight, you can ask her to stand on a scale, which shows how many kilos she weighs. To determine whether anything changes (or not), you take the measurement several times, at intervals that allow the change to happen.

For instance, to assess whether a child is growing, you can measure her height every year. Or, if I have the flu and I want to know whether I am getting better, I measure my body temperature every few hours. If a farmer wants to know whether she earns enough income from her crops, she counts the cash she has received for the crops and subtracts the costs she has incurred. A person’s weight and height, a person’s body temperature and the cash a person has at the end of an economic operation are all indicators.
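
To make this concrete, here is a minimal sketch in Python - with invented figures, purely for illustration - that records the farmer's indicator, net income, once per season as revenue minus costs:

```python
# Hypothetical figures, for illustration only: the indicator "net income"
# is measured once per season as revenue minus costs.
harvests = [
    {"season": "2014", "revenue": 1200, "costs": 900},
    {"season": "2015", "revenue": 1400, "costs": 950},
    {"season": "2016", "revenue": 1350, "costs": 1100},
]

for record in harvests:
    net_income = record["revenue"] - record["costs"]  # one measurement of the indicator
    print(f"{record['season']}: net income = {net_income}")
```

Every season yields a value, whether income rises or falls - which is exactly the point of the next paragraph: an indicator always returns a measurement.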

It doesn’t matter whether my fever drops or not - I will always have a body temperature. That is, there will always be something that you can measure against that indicator. So it makes no sense to talk about “reaching indicators”. What one can strive to produce are results, outcomes, outputs. But you will always “reach” an indicator, regardless of your weight, temperature or income. The indicator is just the concept you use for measurement. In most cases (at least when we talk about social development), the indicator represents just a tiny part - a clue - of the improvements in people's lives you want to obtain.

Apart from being sloppy use of precise vocabulary, trying to “reach indicators” carries a real risk: It can divert attention from the goal that needs reaching or from the results that are expected of an activity. For example, if I have a fever, I can submerge my body in crushed ice for half an hour and then measure my temperature in an armpit: Most likely, the thermometer will show a lower reading, even though my cold might turn into deadly pneumonia as a result of the procedure. I would have made progress against the indicator, but I would have sabotaged the aim of curing my cold.

Indicators do not deserve to be treated as goals. Every so often, the development world learns that lesson the hard way. For instance, in the early years of "Education for all" campaigning, success was measured by calculating the percentage of school-age children who were enrolled at primary schools. Obtaining "good indicators" - or rather, high achievement rates measured against the indicator of school enrolment - is easy: You make sure everyone gets their sons and daughters registered at school once a year. Surprise, surprise: not all of them stay at school; some - sometimes many, and especially girls - drop out before they even complete the first couple of school years. Getting registered doesn't give you an education; you need to stay at school. Nowadays, those interested in progress in education measure both the enrolment rate and the retention rate, i.e. the percentage of children who complete full primary education.
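
To see how the two rates tell different stories, here is a tiny sketch with invented numbers (nothing here comes from real statistics):

```python
# Invented numbers, for illustration only.
school_age_children = 1000
enrolled = 950            # children registered at school
completed_primary = 600   # children who finish the full primary cycle

enrolment_rate = enrolled / school_age_children * 100
retention_rate = completed_primary / enrolled * 100

print(f"Enrolment rate: {enrolment_rate:.0f}%")   # 95% - looks like a success
print(f"Retention rate: {retention_rate:.0f}%")   # 63% - a much more sobering figure
```

A project that only tracks enrolment would declare victory at 95%, while more than a third of the enrolled children never complete primary school.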

The retention rate is still a very weak indicator of effective education. Children whose educational attainment you try to measure might have been subjected to unsanitary conditions in overcrowded classrooms, violent teaching, sexual abuse and fake exams while completing all years of primary education. Some of them might have been better off staying at home. So, to get a full picture of educational attainment, you need to look into the quality of teaching as well. Still, it is useful to measure enrolment and retention rates.

Indicators will never capture the entire goal that you strive to reach. They measure aspects that suggest you’re doing the right thing, or that the change you want to contribute to is actually happening. (By the way, bear in mind that the desired change might happen entirely independently of - or despite - the things you are doing.)

This is a plea for a bit of rigour when using the term "indicator". Misusing words risks emptying them of their meaning. In social development, misuse of that term might even sabotage the purpose for which the term was introduced - more effective social development work.

Tuesday, 19 July 2016

Homa Hoodfar

Some days ago a friend forwarded a message to my inbox - "Homa Hoodfar indicted on unknown charges", it said. 

I met Professor Homa Hoodfar in 2009, at a conference on gender and religion hosted by the Böll Foundation in Berlin. Impressed by the workshop facilitated by Homa, I wrote about that event here. The group #FreeHoma offers a site that shows a selection of her publications.

I do not understand why an internationally respected academic gets arrested when visiting her homeland. What are the charges brought against her? Why? 

My thoughts are with Homa and all those who want to see her freed.

Sadly, women human rights defenders are threatened throughout the world. For more reading, I recommend AWID's site on the issue, which includes recommendations for holistic protection. 

Thursday, 26 May 2016

Let's evaluate together

This is the time of the year when I would like to be able to clone myself, to respond to all those requests for evaluation proposals (RFPs) while busily working away on on-going jobs that need to be completed before the Northern hemisphere summer break sets in. List servers publish new RFPs every day; as July approaches, the deadlines become increasingly adventurous. In late May, RFPs ask for offers to be submitted by the first week of June; the selected evaluation team would start working right away. It seems many of those who publish those last-minute, quick-quick RFPs assume evaluation consultants spend their days sitting in their offices, twiddling their thumbs, chewing their nails or randomly surfing the web, waiting for that one agency to call them up and get them to work right away, tomorrow! Drop everything and work for us!

Many of those evaluations are mid-term or end-of-project evaluations, which tend to happen at highly predictable moments (in the middle or near the end of project implementation) and could be planned many months, even years ahead. But this is not what worries me most about the seasonal avalanche of RFPs. What worries me most is that they tend to produce evaluations of questionable value.

Often, those last-minute RFPs are about projects of modest size, with meagre resources for evaluation. In that situation, the evaluation terms of reference (TOR) would typically ask for 20-40 consulting days to cover the entire set of OECD/DAC criteria - relevance, effectiveness, efficiency, impact and sustainability - all within 2-3 months and on a shoestring budget. As someone who has reviewed a couple of hundred evaluations, I know that the resulting evaluation reports tend to be a bit on the shoddy side. With some luck, the participants in the evaluation might have found the evaluation process useful. But don't look for ground-breaking evidence in quick-and-dirty single-project evaluations.

It does not have to be that way. For instance, organisations that receive money from several funders can convince their funders to pool resources for one well-resourced evaluation of their overall activities rather than a bag of cheap three-week jobs. Funders who support several complementary initiatives in the same geographical region, or who support the same kind of project in many different places, can commission programme evaluations to better understand what has worked and what hasn't, under what circumstances.

It makes more sense to take a step back and look at bigger pictures anyway, because no development intervention happens in isolation. Project X of NGO Y might yield excellent results because NGO Z runs project ZZ in the same region, and project X wouldn't have the slightest chance to succeed if project ZZ weren't there. You need time and space to find out that kind of thing.

And last but absolutely not least, there is no reason why evaluation should only happen in the middle or at the end of an intervention. Some of the most useful evaluations I have come across were built into the project or programme from the beginning, supporting programme managers in setting up monitoring systems that worked both for those involved in the programme and for those evaluating it, and accompanying the project with on-going feedback. This doesn't need to be more expensive or more complicated than the usual end-of-project 40-day job. But it can provide easy-to-use information in time to support well-informed decision-making while the project is being implemented - not just when it's over.

Monday, 18 April 2016

Work to be done: Ending violence against children

A recent report on the global prevalence of violence against children has shown that more than half of the children in 96 countries across the world - 1 billion children aged 2–17 years - experienced violence in the past year. Violence against children is a human rights violation. It makes people more likely to fall ill, and to experience and perpetrate violence in their adult lives. In other words, violence is passed on through generations - even biologically, as it can alter the way a child's genes are expressed.

The Sustainable Development Goals (SDGs) call for an end to “abuse, exploitation, trafficking and all forms of violence against and torture of children” (SDG 16.2) and to “eliminate all forms of violence against all women and girls in the public and private spheres, including trafficking and sexual and other types of exploitation” (SDG 5.2). SDG 4 on education refers to the importance of promoting non-violence in several of its targets, e.g. by calling for a non-violent learning environment (SDG 4.a).

With probably more than half of the world's children experiencing violence, major efforts are needed to attain the SDGs. For inspiration, have a look at UNICEF's Six Strategies for Action. If you know of any useful resources, please share them in a comment.

Friday, 1 April 2016

3500 evaluation reports for everyone - really everyone?

It is delightful to see that more and more agencies are publishing more and more evaluation reports on-line. Now, UNDP has announced, in a pretty infographic, its revamped Evaluation Resource Centre (ERC), which gives access to more than 3500 reports. A bounty for meta-evaluators!

But one thing puzzles me: The video spot that explains the ERC, with its suave male speaker and friendly ambient music in the background, suggests that only men - or, say, short-haired, trouser-wearing necktie-bearers - make decisions. Look at the visuals near minutes 0:57 and 2:18. Little skirt-bearers are only acknowledged as members of "the public". What time and place do we live in? Dear UNDP! We know you can do much better on promoting gender equality, so why not flaunt it and show at least equal numbers of male and female decision-makers in your PR materials?

Thursday, 31 March 2016

More basic terminology

Here's another set of concepts that seem to cause a great deal of confusion. They are much used in results-oriented planning (often called results-based management or RBM). I like to explain them as follows:

Output = The direct result of an activity - something that is under your/your project's control. For instance, I brush and floss my teeth several times a day, and the output is a clean set of teeth.

Outcome = Something that your activity is designed to help produce - but it takes some more factors for that kind of result to come about. For instance, I clean my teeth to avoid getting caries, so healthy teeth are my desired outcome. But my chances of having good teeth are much better if I avoid eating sweets or very acidic food, if I have healthy gums, if I have the right kind of genes, and so on. Even people with clean teeth can get caries.

Impact = A long-lasting result that can be directly traced to an intervention. For example, if my dentist extracts a tooth, the impact is a gap in my mouth. 
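
For readers who like their definitions side by side, here is a toy summary in Python, reusing the examples above (purely illustrative, not an official RBM glossary):

```python
# The three RBM terms with the examples used in this post.
rbm_terms = {
    "output":  ("direct result of an activity, under your control",
                "brushing and flossing -> a clean set of teeth"),
    "outcome": ("result your activity helps produce, together with other factors",
                "clean teeth + sensible diet + healthy gums -> no caries"),
    "impact":  ("long-lasting result directly traceable to an intervention",
                "tooth extraction -> a gap in my mouth"),
}

for term, (definition, example) in rbm_terms.items():
    print(f"{term}: {definition}")
    print(f"    e.g. {example}")
```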

Tuesday, 15 March 2016

Evaluation terminology

Today a friend asked me about the difference between findings and conclusions. I put it this way:

Findings:
  • Dust has gathered into small woolly clouds in the corner of the room.
  • Crumbs are scattered all over the floor.
  • There are a couple of spiderwebs in the corners of the ceiling.
Conclusion: This room is dirty.
Recommendation: Clean it.

Also a nice way to explain indicators.

Busy!

Deep into the evaluation of this exciting project: www.womenonthefrontline.eu. Will be back by April with new posts...

Thursday, 21 January 2016

Happy New Year, good new reading

Happy new year! For me, 2016 starts with an exciting evaluation assignment spanning some 30 organisations in 7 countries. As a result, I have a whole collection of topics I would like to write about here, but no time to do so at this point.

So I would like to recommend some good new reading: DFID has just published the guidance note Shifting Social Norms to Tackle Violence against Women and Girls, which draws on the growing body of literature on the topic. In my view, the best parts are chapters 3-6 on Social Norms Theory and how to integrate it into programme design, all explained in relatively clear, straightforward terms.

Saturday, 12 December 2015

Writing cultures

A few weeks ago I attended another public discussion on the (potential) role of evaluation in policy making. The brief conference - basically, a panel discussion followed by a question-and-answer session - was hosted in Berlin by the Hanns Seidel Stiftung and CEVAL, the Centre for Evaluation at Saar University. The panel was made up of German-speaking evaluation specialists from Austria, Germany and Switzerland.

Policy uptake of evaluation findings has been a main topic of the International Year of Evaluation 2015 - for instance at the Paris conference I wrote about in October. Evidence gathered in evaluations and research is supposed to support political decision-making.

Monday, 23 November 2015

Virtual Workshopping

Earlier this week I facilitated an internal reflection and planning meeting with evalux, a Berlin-based evaluation firm which celebrated its 10th anniversary this year. One of the workshop participants was based in Beijing. It would have been too onerous to fly her over to Berlin, so we found a way to beam her into the workshop via the internet.

I like highly participatory workshops, where people work in alternating configurations –

Wednesday, 14 October 2015

Participatory research!

Wow - read this presentation of participatory research by 16-24-year-old girls and young women in Kinshasa. An exciting piece of work supported by the UK Department for International Development (DFID) and Social Development Direct (SDD). The initiative turns the "objects" of research into researchers. I trust it will yield much richer information than you would get from a "top-down", externally designed survey on young women in Kinshasa. And the young people who collect and analyse the information will gain skills, knowledge and strength in the process! I would expect their interviewees to benefit from the process, too.

Governments that fund development want to see "evidence-based" approaches; that is, research needs to be built into development work. Fortunately, the widespread misconception that only large-scale quantitative surveys and experiments yield reliable evidence appears to be fading.

Monday, 12 October 2015

Good things happen in the short term and bad things happen in the long term

This is a long title, but I love that sentence, culled from Elliot Stern’s intervention on the Benefits and Barriers to Evaluation Use at the recent evaluation conference in Paris. The one-day conference, convened jointly by the European Evaluation Society, France’s evaluation society, the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the Organisation for Economic Co-operation and Development (OECD), took place at the quite extraordinary UNESCO headquarters in Paris.

Wednesday, 23 September 2015

My data

A couple of days ago, a colleague working on an interesting new e-learning tool invited me to test an initial, not yet official version of that tool. I clicked on the link they had sent me. A screen appeared that asked for my full name, my e-mail address and my company. Every single field was mandatory; that is, I could not move to the subsequent screen without providing my name, my e-mail address and a company name.

That is a threshold. When you open a book or a newspaper, no-one asks for your name, your e-mail address or other personal data. You open the thing and you read it. The publisher can track the number of books sold and - to some extent - the places where they have been sold, and that's it. Has anyone ever complained about that?

Wednesday, 16 September 2015

Interesting debate on evaluating human rights work

Who is evaluation of human rights work for? How about "strategic plausibility" as an evaluation criterion? How do we measure success when protecting civilians in conflict? These are the kinds of questions discussed in this web debate on evaluating human rights work. Very commendable!