Long-awaited new guidance on applying the evaluation criteria defined by the Development Assistance Committee of the Organisation for Economic Co-operation and Development (OECD-DAC) is finally available in this publication! Long-awaited, because evaluators and development practitioners have grown desperate with assignments that are expected to gauge every single project against every single OECD-DAC criterion, regardless of the project's nature and of the moment and resources of the evaluation. This new, gently worded document is a weapon evaluators can use to defend their quest for focus and depth in evaluation.
Those who commission evaluations, please go straight to page 24, which states very clearly: "The criteria are not intended to be applied in a standard, fixed way for every intervention or used in a tickbox fashion. Indeed the criteria should be carefully interpreted or understood in relation to the intervention being evaluated. This encourages flexibility and adaptation of the criteria to each individual evaluation. It should be clarified which specific concepts in the criteria will be drawn upon in the evaluation and why."
On page 28, you will find a whole section titled "Choosing which criteria to use", which makes it clear that evaluations should focus on the OECD-DAC criteria that make sense in view of the needs and possibilities of the specific project and of the evaluation process. It provides a wonderful one-question heuristic: "If we could ask only one question about this intervention, what would it be?" And it reminds readers that some questions are better answered by other means, such as research projects or a facilitated learning process. The availability of data and resources - including time - for the evaluation helps determine which evaluation criteria to apply, and which not. Page 32 reminds us of the necessity of using a gender lens, with a handy checklist-like table on page 33 (better late than never).
About half of the publication is dedicated to defining the six evaluation criteria - relevance, coherence, effectiveness, efficiency, impact, and sustainability - with plenty of examples. This is also extremely helpful. Each chapter comes with a table that summarises common challenges related to each criterion - and what evaluators and evaluation managers can do to overcome them. It also shows very clearly that lack of preparation on the evaluation management side makes it very hard for evaluators to do a decent job - see for example table 4.3 (p.55) on assessing effectiveness.
The document is a bit ambiguous on some questions: the chapter on efficiency still defines efficiency as "the conversion of inputs (...) into outputs (...) in the most cost-effective way possible, as compared to feasible alternatives in the context" (p.58), which makes it extremely hard to assess the efficiency of, say, a project that supports litigation in international courts - an intervention that may take decades to yield the desired result. However, the guidance document states that resources should be understood in the broadest sense and include full economic costs. On that basis, one can indeed argue, as Jasmin Rocha and I have on Zenda Ofir's blog, that non-monetary costs, hidden costs and the cost of inaction must be taken into account. Yet table 4.4 on efficiency-related challenges remains vague (p.61). Has anyone read the reference quoted in the table (Palenberg 2011)? I did, and found it very cautious in its conclusions. My impression is that in many cases, evaluators of development interventions are not in a position to assess efficiency in any meaningful manner.
On the whole, I would describe the new OECD-DAC publication as a big step forward. I warmly recommend it to anyone who designs, manages or commissions evaluations.