Applying the GREET checklist for assessing reporting of evidence-based practice educational interventions and teaching showed inter-rater discrepancies and item-level challenges.
Objectives: This study aimed to analyze in depth the inter-rater discrepancies in Guideline for Reporting Evidence-Based Practice Educational Interventions and Teaching (GREET) item ratings between two data extractors who were first-time (GREET-naïve) users of the checklist, and to explore challenges in using the GREET checklist as a tool to assess the completeness of reporting.
Study Design and Setting: This was a secondary analysis conducted on the literature synthesized in a prior scoping review. Two independent raters, both first-time users of the GREET checklist, evaluated the trials using a modified version of the 17-item checklist. Item 4 (evidence-based practice [EBP] content) was excluded as inapplicable, while item 5 (educational materials) was subdivided to assess description and accessibility separately. Discrepancies in GREET item ratings and inter-rater agreement were analyzed using descriptive statistics and Cohen's Kappa. We also documented challenges encountered in assessing individual GREET items.
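For orientation, the minimal sketch below (Python; the example table, column names, and use of pandas/scikit-learn are illustrative assumptions, not the authors' data or analysis code) shows how two raters' per-item Yes/No/Unclear ratings could be compared to count discrepancies and compute an unweighted Cohen's Kappa:

    # Minimal sketch of an inter-rater agreement analysis for GREET item ratings.
    # Assumed layout (not from the paper): one row per trial x item, with the two
    # raters' judgments stored as "rater1" and "rater2" (values Yes/No/Unclear).
    import pandas as pd
    from sklearn.metrics import cohen_kappa_score

    ratings = pd.DataFrame({
        "trial_id": [1, 1, 2, 2, 3, 3],
        "item":     ["Environment", "Attendance"] * 3,
        "rater1":   ["Yes", "No", "Unclear", "Yes", "No", "Yes"],
        "rater2":   ["Yes", "Yes", "Unclear", "Yes", "No", "No"],
    })

    # Item-level ratings on which the two raters disagreed.
    discrepancies = ratings[ratings["rater1"] != ratings["rater2"]]
    print(f"Discrepant ratings: {len(discrepancies)}/{len(ratings)} "
          f"({len(discrepancies) / len(ratings):.0%})")

    # Mean number of discrepancies per trial (trials with no discrepancy count as 0).
    per_trial = discrepancies.groupby("trial_id").size().reindex(
        ratings["trial_id"].unique(), fill_value=0)
    print("Mean discrepancies per trial:", per_trial.mean())

    # Unweighted Cohen's Kappa across all item ratings.
    print(f"Cohen's kappa: {cohen_kappa_score(ratings['rater1'], ratings['rater2']):.3f}")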
Results: We analyzed 161 randomized controlled trials. Initial assessments yielded discrepancies in 20% of item ratings (n = 561/2737); successive consensus rounds reduced this to 14% and then 2%, and full agreement was ultimately achieved. The mean number of discrepancies per trial was 3.5. Inter-rater agreement was substantial (κ = 0.616; 95% CI: 0.590-0.642). The highest disagreement rates were observed for the items addressing "Environment," "Materials included," "Attendance," and "Adaptations." Detailed analysis revealed that ambiguity in item phrasing and variability in manuscript reporting contributed to the inconsistency. Several GREET items were identified as candidates for future refinement.
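As a quick arithmetic check on these figures (an illustrative calculation, not part of the published analysis): rating 161 trials on 17 items each (the 17 original GREET items minus item 4, plus item 5 split into two parts) gives 2737 item-level ratings, and the reported counts reproduce the stated percentages.

    # Illustrative consistency check on the reported totals (not the paper's code).
    trials = 161
    items_per_trial = 17                      # 17 items - item 4 + item 5 split in two
    total_ratings = trials * items_per_trial  # 2737 item-level ratings
    discrepant = 561                          # initial discrepant ratings

    print(total_ratings)                         # 2737
    print(f"{discrepant / total_ratings:.1%}")   # 20.5% -> reported as 20%
    print(f"{discrepant / trials:.2f}")          # 3.48 -> reported as 3.5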
Conclusion: Although the GREET checklist provides a valuable framework for assessing reporting quality in EBP educational interventions, its application may yield substantial discrepancies between GREET-naïve raters. Clearer item definitions, improved guidance materials, and training could enhance its reliability. Findings support the need for continued refinement of the checklist and underscore the importance of comprehensive and transparent reporting in educational research.
Plain Language Summary: Complete and clear descriptions of teaching interventions help others repeat and build on research. We studied how consistently two first-time users could apply the Guideline for Reporting Evidence-Based Practice Educational Interventions and Teaching (GREET) checklist. The GREET is a tool meant to help authors report evidence-based practice education more completely.

In this study, two independent raters with no previous experience with the GREET assessed 161 randomized controlled trials about education for parents/caregivers of children with disabilities. They checked whether those trials adhered to the GREET when reporting various aspects of the educational interventions analyzed. For each checklist item, the raters marked trials with "Yes/No/Unclear" and copied the text that supported their decision. Disagreements were tracked and then discussed in several rounds to reach a consensus.

At first, the raters disagreed on 20% of all item ratings (561 of 2737). After two discussion rounds, disagreements dropped to 14% and then to 2%, and full agreement was achieved after a final consensus step. On average, each trial had 3.5 initial discrepancies. Overall agreement between the two raters was substantial. The most disagreement occurred for items about the learning environment, materials included (what was actually provided and whether it was accessible), attendance, and adaptations (planned changes). Clearer GREET items, such as basic intervention description, theory, and educational strategies, had far fewer discrepancies. Many rating challenges were due to vague wording in the GREET and inconsistent reporting in the manuscripts. Some GREET items seemed to overlap, and several could benefit from being split into multiple items and given more precise definitions.

What does this mean? The GREET checklist is useful, but first-time users can interpret its items differently. Short training and calibration, clearer item definitions, better examples, and guidance on what counts as "materials" or "attendance" would likely improve rating reliability. Authors of trials testing educational interventions can also help by reporting settings, educational materials (and how to access them), attendance, and any planned or unplanned changes in more detail. These steps would make education research more transparent and easier to reproduce.
(Copyright © 2025 Elsevier Inc. All rights reserved.)
Declaration of competing interest: The authors declare no conflicts of interest.