Abstract:
The agreement between two raters judging items on a categorical scale is traditionally assessed with Cohen’s kappa coefficient. We introduce a new coefficient for quantifying the degree of agreement between an isolated rater and a group of raters on a nominal or ordinal scale. The group of raters is regarded as a whole, a reference or gold-standard group with its own heterogeneity. The coefficient, defined on a population-based model, requires a specific definition of the concept of perfect agreement. It has the same properties as Cohen’s kappa coefficient and reduces to the latter when the group contains only one rater. The new approach avoids the need for a consensus within the group of raters and generalizes Schouten’s index. The method is illustrated on published syphilis data and on data from a study assessing the diagnostic reasoning ability of medical students against expert knowledge.
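To make the single-rater special case concrete, the sketch below computes the classical Cohen’s kappa (to which the proposed coefficient reduces when the reference group contains only one rater) from a two-rater confusion matrix. This is a minimal illustration of the standard formula κ = (p_o − p_e)/(1 − p_e), not an implementation of the new rater-versus-group coefficient; the function name and the example table are hypothetical.

```python
import numpy as np

def cohen_kappa(confusion: np.ndarray) -> float:
    """Cohen's kappa for two raters from a K x K confusion matrix.

    Rows index the categories assigned by rater 1, columns those by rater 2.
    """
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    p_observed = np.trace(confusion) / n       # proportion of exact agreement
    row_marg = confusion.sum(axis=1) / n       # rater 1 category proportions
    col_marg = confusion.sum(axis=0) / n       # rater 2 category proportions
    p_expected = np.dot(row_marg, col_marg)    # chance agreement under independence
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical example: two raters classifying 100 items into 3 categories
table = np.array([[25,  5,  0],
                  [ 4, 30,  6],
                  [ 1,  4, 25]])
print(round(cohen_kappa(table), 3))  # ~0.697
```

The new coefficient described in the paper replaces the single second rater by a heterogeneous reference group, which requires the population-based definition of perfect agreement mentioned above; its exact expression is given in the article itself.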