-are able to assess the quality of an evaluation according to common standards,
-know which criteria to use for this purpose and how to apply them.
-They evaluate evaluations in practice and thereby learn about the added value of a systematic approach.
-Finally, the participants gain an overview of practical tools and applications for meta-evaluations.
Do evaluations always provide the ‘right’ information? Are they valid, reliable, and correct? Can we believe their findings? And what, after all, is the merit of our work to society, or at least to the people who read our reports?
Evaluators are not the only ones who contemplate such questions from time to time. More and more commissioners and political decision-makers want to know whether they can trust evaluations and what they can actually learn from them. Moreover, the added value of such exercises in providing an empirical basis for decision-making often remains obscure. It is therefore no wonder that the demand for meta-evaluations is rising.
We all know what evaluations are good for and how we should implement them, don’t we? Numerous textbooks, training programs, and podcasts provide methodological and practical guidance. Complying with such standards and following scientific codes of conduct should guarantee valid, reliable, and useful findings. But can we prove it? How do we actually find out about the methodological quality of an evaluation, the validity and reliability of its findings, and ultimately the usefulness of its conclusions and recommendations?
Implementing institution/s or person/s (incl. short description)
Stefan Silvestrini (CEval, Universität Saarbrücken)
19.10. - 23.10.2020
Commissioners, Evaluators, Policy Makers
Participants need a basic understanding of evaluation as a management tool. While it is not necessary to have conducted an evaluation, ideally participants should have read a number of evaluation reports beforehand and be aware of their strengths and weaknesses.