Let me explain. For those readers who are not evaluators - just try to imagine you are one for these two paragraphs (and maybe also when you draw up your next TOR). You are developing an offer for an evaluation, or you have already won the bid and are preparing the inception report. You sit at your desk, alone or with your evaluation teammates, and you gaze at the TOR page - or even pages - of evaluation questions. Lists of 30-40 items totalling 60-100 questions are not uncommon these days. Some questions are broad, of the type "how relevant is the intervention in the local context?"; some are extremely detailed, for instance "do the training materials match the trainers' skills?". (I am making these up, but they are pretty close to real life.) Often, in the sector where I do much of my evaluation work, the questions are roughly structured along the OECD/DAC evaluation criteria, which is fine as far as it goes. But your specific evaluation might need a different structure to match the logic of the project - think of human rights work or political campaigns, for example.
Why do organisations come up with those long lists of very specific questions? As an evaluator and as the author of several meta-evaluations, I have two hypotheses:
- Some evaluations are shoddy. Understandably, people in organisations that have experienced sloppily done evaluations may want to take greater control of the process - without realising that tight control means losing learning opportunities.
- Many organisations adhere to the very commendable practice of involving many people in TOR preparation - but their evaluation department is shy about filtering and tightening the questions so that they form a coherent, manageable package.
What can we do about it? Those who develop TOR should focus on a small set of central questions they would like to have answered - if your budget has fewer than six digits (in US$ or euros), try to stick to five to ten really important questions; less is more. Build in time for an inception report, in which the evaluators must present how they will answer the questions and what indicators or guiding questions they will use in their research. Read that report carefully to see whether it addresses the important details you are looking for - if it doesn't, and if you still feel certain details are important, then discuss them with the evaluators.
My advice to evaluators is not to surrender too early - often, clients will be delighted to be presented with a restructured, clearer set of evaluation questions, if your proposal makes sense. If they can't be convinced to reduce their questions, then try to reach an agreement as to which questions should receive the most attention, and explain which cannot be answered with a reasonable degree of validity. This may seem banal to some among you, but to judge from many evaluation reports in the development sector, it doesn't always happen.