Result: Measuring agreement among several raters classifying subjects into one or more (hierarchical) categories: A generalization of Fleiss' kappa.

Title:
Measuring agreement among several raters classifying subjects into one or more (hierarchical) categories: A generalization of Fleiss' kappa.
Authors:
Moons, F. (Freudenthal Institute, Utrecht University, PO Box 85170, 3508 AD Utrecht, the Netherlands; Antwerp School of Education, University of Antwerp, Antwerp, Belgium; f.moons@uu.nl); Vandervieren, E. (Antwerp School of Education, University of Antwerp, Antwerp, Belgium)
Source:
Behavior research methods [Behav Res Methods] 2025 Sep 15; Vol. 57 (10), pp. 287. Date of Electronic Publication: 2025 Sep 15.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: Springer Country of Publication: United States NLM ID: 101244316 Publication Model: Electronic Cited Medium: Internet ISSN: 1554-3528 (Electronic) Linking ISSN: 1554351X NLM ISO Abbreviation: Behav Res Methods Subsets: MEDLINE
Imprint Name(s):
Publication: 2010- : New York : Springer
Original Publication: Austin, Tex. : Psychonomic Society, c2005-
References:
J Clin Epidemiol. 1990;43(6):543-9. (PMID: 2348207)
J Hand Surg Am. 2024 May;49(5):482-485. (PMID: 38372689)
Br J Math Stat Psychol. 2008 May;61(Pt 1):29-48. (PMID: 18482474)
J Psychiatr Res. 1981;16(1):29-39. (PMID: 7205698)
Biometrics. 1980 Jun;36(2):207-16. (PMID: 7190852)
Stat Med. 2002 Jul 30;21(14):2109-29. (PMID: 12111890)
Grant Information:
1S95920N Fonds Wetenschappelijk Onderzoek
Contributed Indexing:
Keywords: Chance-corrected; Fleiss’ kappa; Hierarchical categories; Inter-rater agreement; Inter-rater reliability; Multiple categories; Weighted categories
Entry Date(s):
Date Created: 20250915 Date Completed: 20250916 Latest Revision: 20251009
Update Code:
20251009
PubMed Central ID:
PMC12436533
DOI:
10.3758/s13428-025-02746-8
PMID:
40954368
Database:
MEDLINE

Further Information

Cohen's and Fleiss' kappa are well-known measures of inter-rater agreement, but they restrict each rater to selecting only one category per subject. This limitation is consequential in contexts where subjects may belong to multiple categories, such as psychiatric diagnoses involving multiple disorders or classifying interview snippets into multiple codes of a codebook. We propose a generalized version of Fleiss' kappa, which accommodates multiple raters assigning subjects to one or more nominal categories. Our proposed κ statistic can incorporate category weights based on their importance and account for hierarchical category structures, such as primary disorders with sub-disorders. The new κ statistic can also manage missing data and variations in the number of raters per subject or category. We review existing methods that allow for multiple category assignments and detail the derivation of our measure, proving its equivalence to Fleiss' kappa when raters select a single category per subject. The paper discusses the assumptions, premises, and potential paradoxes of the new measure, as well as the range of possible values and guidelines for interpretation. The measure was developed to investigate the reliability of a new mathematics assessment method, of which an example is elaborated. The paper concludes with a worked example of psychiatrists diagnosing patients with multiple disorders. All calculations are provided as an R script and an Excel sheet to facilitate access to the new κ statistic.
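For orientation, the abstract states that the proposed statistic reduces to classic Fleiss' kappa when every rater selects exactly one category per subject. The sketch below computes that classic single-category Fleiss' kappa; it is an illustrative implementation for this special case only (the function name and data layout are our own, not the authors' R code, and the generalized weighted/hierarchical statistic is available in the paper's supplementary materials).

```python
def fleiss_kappa(counts):
    """Classic Fleiss' kappa (one category per rater per subject).

    counts: list of per-subject lists, where counts[i][j] is the number of
    raters who assigned subject i to category j. Every row must sum to the
    same number of raters n, with n >= 2.
    """
    N = len(counts)        # number of subjects
    n = sum(counts[0])     # raters per subject
    k = len(counts[0])     # number of categories

    # Mean observed per-subject agreement: P_i = (sum_j n_ij^2 - n) / (n(n-1)).
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N

    # Chance agreement from marginal category proportions p_j.
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)

    # Chance-corrected agreement.
    return (P_bar - P_e) / (1 - P_e)
```

For example, with three raters and two categories, perfect agreement (`[[3, 0], [0, 3]]`) yields κ = 1, while the maximally split table `[[2, 1], [1, 2]]` yields κ = −1/3, i.e., agreement below chance.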
(© 2025. The Author(s).)

Declarations. Ethical Approval: Not applicable. Conflicts of Interest: We have no conflicts of interest to disclose. Consent to Participate: Not applicable. Consent for Publication: Not applicable. Open practice statement: All materials and analysis code are available as Supplementary material. We provide an Excel spreadsheet containing all examples discussed in the paper, as well as an R script with the required data files. A detailed proof of Theorem 1 is also included. For the most recent version of the materials, additional code (e.g., for additional statistical programs), and updates, please refer to the OSF project: https://osf.io/q5nft/ . None of the reported studies were preregistered. An earlier version of this manuscript has been published as an arXiv preprint prior to formal peer review and publication: https://doi.org/10.48550/arXiv.2303.12502 .