Evaluations often come with terms of reference (TOR) that discourage even the most intrepid evaluator. A frequent issue is a long list of evaluation questions that oscillates between the broadest interrogations – e.g. “what difference has the project made in people’s lives?” – and very specific aspects, e.g. “what was the percentage of women participating in training sessions?”. Sometimes I wonder whether such TOR actually state what people really want to find out.
I remember the first evaluation I commissioned, back in the last quarter of the 20th century. I asked my colleague how to write TOR. She said, “Just take the TOR from some other project and add questions that you find important”. I picked up the first evaluation TOR I came across, found all the questions interesting and added lots, which I felt showed that I was smart and interested in the project. Then I shared the TOR in our team and others followed suit, asking plenty more interesting questions.
I wonder whether this type of process is still in use. Typically, you end up with a long list of “nice to know” questions that makes it very hard to focus on the questions that are crucial for the project.
I know I have written about this before. I can’t stop writing about it. It is very rare that I come across TOR with evaluation questions that appear to describe accurately what people really want and need to find out.
If, as someone who commissions the evaluation, you are not sure which questions matter most, ask those involved in the project. It is very useful to ask them, anyway, even if you think you know the most important questions. If you need more support, invite the evaluator to review the questions in the inception phase – with you and all other stakeholders in the evaluation – and be open to major modifications.
But please, keep the list of evaluation questions short and clear. Don’t worry about what exactly the evaluator will need to ask or look for to answer your questions. It is the evaluator’s job to develop indicators, questionnaires, interview guides and so forth. She’ll work with you and others to identify or develop appropriate instruments for the specific context of the evaluation. (The case is somewhat different in organisations that attempt to gather data against standardised indicators across many evaluations – but even then, the indicator set can be focused and parsimonious, to make sure they get high-quality information and not just ticked-off boxes.)
Even just one or two evaluation questions is perfectly fine. Anything more than ten gets confusing. And set aside time for a proper inception phase, when the evaluation specialists will work with you on designing the evaluation. Build in joint reflection loops. You’ll get so much more out of your evaluation.
Provided the right short list of questions is chosen, I couldn’t agree more. One frequently gets evaluation TOR loaded with every question under the sun and then some. I suspect this is often due to the commissioner – person or organisation – preening itself on its sophisticated professionalism.
Wonderful job and congratulations to you! I’m from Mali, an Expert Consultant, University Teacher and Researcher.
I’d like to partner with you.
Salomon