A few weeks ago I attended another public discussion on the (potential) role of evaluation in policy making. The brief conference - basically, a panel discussion followed by a question-and-answer session - was hosted in Berlin by the Hanns Seidel Stiftung and CEVAL, the Centre for Evaluation at Saar University. The panel was made up of German-speaking evaluation specialists from Austria, Germany and Switzerland.
Policy uptake of evaluation findings has been a main topic of the International Year of Evaluation 2015 - for instance at the Paris conference I wrote about in October. Evidence gathered through evaluation and research is supposed to support political decision making.
One main point raised by representatives of the policy-making side at the Paris conference was that evaluations needed to "sing" to influence policy. Evaluation reports that are readable and crisp, with clear conclusions and realistic recommendations, supposedly stand a better chance of being consulted than lengthy dissertations written in incredibly precise but hard-to-follow language. That seems fairly straightforward.
What intrigued me at the recent Berlin conference was that the panel members' discourse suggested they found it perfectly normal for evaluation reports to be unwieldy and hard to read. True: many of the evaluation reports published in German that I have come across read like scholarly treatises. I have also seen reports that layer heaps of scientific-sounding gibberish onto a thin analytical basis (a problem that affects not only German speakers). But the reliance on scholarly language in evaluation is intriguing. Different languages, different writing and reading cultures?