Reflections on workshop: Adaptive co-design of research for development (R4D) evaluations – the need for engagement, learning and ongoing reflection

  • From the Independent Advisory and Evaluation Service
  • Published on 19.05.23


How should the recently developed Quality of Research for Development (QoR4D) Evaluation Guidelines be applied to the evaluation of research processes and performance? This question was the focus of a workshop held in Rome on 27‒28 February 2023 (see report), which explored widening conventional science-focused assessments to incorporate engagement, learning, and impact.

As background, the shift from assessing quality of science to considering the quality of research more broadly in CGIAR emerged from a Green Paper by Harvard University and a subsequent 2016 directive from the Independent Science and Partnership Council (ISPC) to develop a framework for QoR4D based on the founding principles of “salience, credibility, and legitimacy.”

The result was a framework consisting of four elements and 17 criteria for evaluation. Workshop participants were keen to learn how to apply this framework, in a timely way, to the wide range of activities being evaluated, so as to improve design and sharpen assessment of effectiveness. Usefulness to implementers and practitioners was of particular interest, reflecting participants’ desire for tools that remain relevant across the widest range of real-world situations.

A key theme that emerged in discussion was the tension between self-assessment and peer review. A recent advance in science assessment worldwide is the use of mixed-methods frameworks to inform internal and external peer reviews, with impact evaluations guided by co-created theories of change. The challenge is to ensure that all relevant stakeholders are involved in co-creating the theory of change.

Another challenge is achieving consistency of judgment: applying comparable quality criteria across a wide range of research projects with diverse performance expectations in different contexts. Much can be learned from the QoR4D evaluations of other organizations, which add value at different entry points, such as priority setting, design, implementation, evaluation, communication, predictive science, and reflective science. There was agreement that the CGIAR QoR4D Guidelines have wider application beyond CGIAR; their uptake and use by donors, scientists, and partners should therefore be monitored.
