CGIAR’s Quality Assurance Process: A snapshot of what it is and what it does
Published on
16.06.25

CGIAR employs a rigorous quality assurance (QA) process to ensure that its technical reporting is accurate and backed by high-quality evidence. This is crucial for CGIAR’s transparency and decision-making and for its accountability to its many donors and partners.
The QA process independently evaluates, through multi-round validation, the key results submitted by CGIAR research programs each year against established criteria and evidence. The dedicated QA Platform used for this process is integrated into CGIAR’s online reporting tool, the Performance and Results Management System (PRMS). The QA Platform gives result submitters and assessors a user-friendly way to exchange comments and track progress. The resulting quality-assured data then inform CGIAR’s Results Dashboard, technical and annual reports, Portfolio Narrative, and much else. As the volume of results processed through the QA system grows, a major current focus is on enhancing both the efficiency and the accuracy of the QA process, particularly by integrating artificial intelligence (AI) tools.
What data undergo QA?
The data that undergo QA include results reported as “Outputs”, such as knowledge products and innovation development, and results reported as “Outcomes”, such as policy changes, innovation use, and Innovation Packages. All key data points (e.g., titles, geographic scope) are subject to rigorous validation, with a focus on the reported “Readiness and Use Levels” and “Number of Users” for innovations. These data are quality assured either in one round (low-priority data points) or in two rounds (high-priority data points). Among knowledge products, peer-reviewed publications and monitoring, evaluation, learning, and impact assessment (MELIA) studies undergo full QA, while the metadata of other knowledge products are reviewed by the Knowledge Managers of CGIAR’s 15 Centers.
How the QA process works
The QA process is managed by CGIAR’s Portfolio Performance Unit through a team leader, subject matter experts, and impartial QA assessors drawn from a mix of CGIAR staff and external consultants. It is conducted in six steps and in two or more batches each year to manage workloads and avoid duplication. Supporting materials are tailored to different users (submitters, assessors, third-party adjudicators, and staff of the Portfolio Performance Unit) and include a QA Platform User Manual, a detailed description of the QA process, and QA Assessor Guidance.
In the first round of QA, two assessors, one of them a lead assessor, independently review the submitted data, suggest changes or corrections, and provide a rationale for any changes suggested to priority fields. The data submitters then have two weeks to address the assessors’ comments by validating, implementing, or disagreeing with the suggested modifications. In the second round, the lead assessors flag any unimplemented changes or persistent disagreements, and a qualified third-party decision-maker either brokers agreement or instructs that a result not be reported if the relevant criteria are not met. Staff of the Portfolio Performance Unit then implement any third-party-instructed changes in the PRMS and inform the submitters of those changes.
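The two-round flow above can be sketched as a simple decision procedure. The statuses and field names below are illustrative assumptions for the sketch only, not the actual PRMS QA Platform schema:

```python
# Illustrative sketch of the two-round QA flow; statuses and
# field names are hypothetical, not the real PRMS schema.

def resolve_result(suggestions, submitter_responses, third_party_ruling=None):
    """Return the final status of one submitted result.

    suggestions: assessor-suggested changes to priority fields
    submitter_responses: maps each suggestion to "implemented",
        "validated", or "disagreed"
    third_party_ruling: "accept" or "reject"; consulted only when
        a disagreement persists into the second round
    """
    # Round 1: submitters address every assessor comment.
    unresolved = [s for s in suggestions
                  if submitter_responses.get(s) == "disagreed"]
    if not unresolved:
        return "confirmed"

    # Round 2: the lead assessor escalates persistent disagreements
    # to a third-party decision-maker.
    if third_party_ruling == "accept":
        return "confirmed"    # submitter's position upheld
    return "not_reported"     # relevant criteria judged not met

# Example: one disagreement escalated and rejected by the third party
status = resolve_result(
    ["fix geographic scope"],
    {"fix geographic scope": "disagreed"},
    third_party_ruling="reject",
)
print(status)  # not_reported
```

The sketch makes one design point explicit: results only leave the normal two-round path when a submitter actively disagrees with a suggested change.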
QA performance
In 2024, the QA process handled a total of 5,152 results, nearly 8 percent more than in 2023 and nearly 90 percent more than in 2022. The rate at which submitters accepted the assessors’ comments remained consistently high, at over 80 percent. Of the 792 outcomes submitted in 2024, 738 (93 percent) were confirmed following the QA process.
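The figures above can be sanity-checked with a line of arithmetic. The implied 2023 and 2022 totals below are back-calculated approximations from the stated growth rates, not published numbers:

```python
# Confirmation rate for 2024 outcomes, from the figures in the text.
confirmed, submitted = 738, 792
print(f"{confirmed / submitted * 100:.0f}%")  # 93%

# Implied earlier totals, back-calculated from the stated growth
# rates ("nearly 8%" and "nearly 90%"); approximations only.
total_2024 = 5152
print(round(total_2024 / 1.08))  # roughly 4770 results in 2023
print(round(total_2024 / 1.90))  # roughly 2712 results in 2022
```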
Improving the QA process
Ongoing efforts to optimize the QA process include automating the assessment of some fields with artificial intelligence (AI). CGIAR is also sharpening the focus of its QA by assessing reported results against the targets set out in the theories of change, and is promoting more consistent use of the QA process by improving its guidance materials, training the QA assessors, and establishing a cross-CGIAR community of practice on QA.
Integrating AI in QA
CGIAR is integrating AI-powered tools into its QA process to enhance its accuracy, efficiency, and consistency. Two initial focus areas are reducing the time needed to review evidence when determining “Innovation Readiness” levels and when scoring the CGIAR “Impact Areas” targeted. An AI Helper Tool was developed in 2024 using OpenAI’s GPT-4o and Python, along with an evidence extractor and custom prompts. The Tool automates the scoring of Impact Areas and the determination of innovation Readiness Levels against defined criteria and evidence. Since February 2025, it has been integrated into the PRMS QA Platform for large-scale testing. Testing to date shows strong alignment between AI and human assessments, but further improvements in accuracy are still needed, largely because no large, reliable reference dataset of expert-validated Impact Area scores and innovation Readiness Levels yet exists. Assessors found that the AI Helper Tool saved time, allowing them to focus on other aspects of the review.
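As an illustration only, a tool of this kind might pair extracted evidence with the scoring criteria in a structured prompt before calling the model. Everything below (function names, the readiness scale labels, the prompt wording) is an assumption for the sketch; the actual AI Helper Tool’s prompts and code are not shown here:

```python
# Hypothetical sketch of an AI-assisted readiness scorer; the real
# AI Helper Tool's prompts, criteria, and code are not public.

READINESS_SCALE = {  # illustrative labels, not CGIAR's official scale
    0: "idea", 3: "proof of concept", 6: "validated", 9: "ready for use",
}

def build_readiness_prompt(evidence_text: str) -> str:
    """Assemble a structured prompt asking the model to assign an
    Innovation Readiness level based only on the extracted evidence."""
    scale = "\n".join(f"{k}: {v}" for k, v in sorted(READINESS_SCALE.items()))
    return (
        "You are a QA assessor. Using only the evidence below, "
        "assign an Innovation Readiness level and justify it.\n\n"
        f"Scale:\n{scale}\n\nEvidence:\n{evidence_text}\n\n"
        'Answer as JSON: {"level": <int>, "rationale": <string>}'
    )

def score_readiness(evidence_text: str) -> str:
    """Send the assembled prompt to GPT-4o (requires OPENAI_API_KEY)."""
    from openai import OpenAI  # pip install openai
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": build_readiness_prompt(evidence_text)}],
    )
    return resp.choices[0].message.content
```

Keeping prompt assembly separate from the API call, as here, makes the scoring criteria easy to version and test, which matters when building the kind of expert-validated reference dataset the article identifies as missing.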
Conclusion
CGIAR’s QA process, and the ongoing work to make it more efficient and accurate, highlight the organization’s commitment to continuous improvement and to leveraging technology to further strengthen its technical reporting.
By Mariagiulia Mariani, CGIAR Portfolio Performance Unit