What emerged from the online discussion on evaluating Quality of Science?

  • From: Independent Advisory and Evaluation Service
  • Published on: 09.05.22

From 28 March until 19 April, CAS/Evaluation co-hosted a discussion on EvalForward with the FAO Office of Evaluation. We hope at least some of you were able to follow it.

While broad in scope, the discussion was part of ongoing work and consultations towards developing guidelines to operationalize the Quality of Science evaluation criterion in the revised CGIAR Evaluation Policy.

As many as 22 participants from a range of backgrounds, including non-evaluators and non-CGIAR affiliates, engaged by posting their contributions. Among them were CGIAR affiliates, FAO affiliates, donors, independent consultants, evaluators, researchers, university professors, social media experts and leaders of private organizations. Special thanks go to contributors who were or are members of the MEL COP, including Keith Child, Claudio Proietti and Valentina de Col. The richness of the discussion among such a diverse range of experts highlighted broad agreement on the importance of framing and using context-specific evaluation criteria to contextualize evaluations of science, technology, and innovation.

Through the discussion, the following frameworks were introduced: the Quality of Research for Development (QoR4D) frame of reference (directly or indirectly linked to the evaluation criteria of relevance, legitimacy, effectiveness and scientific credibility), the Research Excellence Framework (REF), and the RQ+ Assessment Instrument.
