INTERRATER RELIABILITY AND AGREEMENT OF PERFORMANCE RATINGS - A METHODOLOGICAL COMPARISON

Citation
J.W. Fleenor et al., INTERRATER RELIABILITY AND AGREEMENT OF PERFORMANCE RATINGS - A METHODOLOGICAL COMPARISON, Journal of Business and Psychology, 10(3), 1996, pp. 367-380
Citations number
19
Subject Categories
Business; Psychology, Applied
ISSN journal
0889-3268
Volume
10
Issue
3
Year of publication
1996
Pages
367 - 380
Database
ISI
SICI code
0889-3268(1996)10:3<367:IRAAOP>2.0.ZU;2-C
Abstract
This paper demonstrates and compares methods for estimating the interrater reliability and interrater agreement of performance ratings. These methods can be used by applied researchers to investigate the quality of ratings gathered, for example, as criteria for a validity study, or as performance measures for selection or promotional purposes. While estimates of interrater reliability are frequently used for these purposes, indices of interrater agreement appear to be rarely reported for performance ratings. A recommended index of interrater agreement, the T index (Tinsley & Weiss, 1975), is compared to four methods of estimating interrater reliability (Pearson r, coefficient alpha, mean correlation between raters, and intraclass correlation). Subordinate and superior ratings of the performance of 100 managers were used in these analyses. The results indicated that, in general, interrater agreement and reliability among subordinates were fairly high. Interrater agreement between subordinates and superiors was moderately high; however, interrater reliability between these two rating sources was very low. The results demonstrate that interrater agreement and reliability are distinct indices and that both should be reported. Reasons are discussed as to why interrater reliability should not be reported alone.
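The distinction drawn in the abstract is between reliability (do raters rank ratees consistently?) and agreement (do raters assign similar absolute scores?). The sketch below, which is not the authors' code, illustrates that distinction on a small hypothetical ratings matrix: it computes the mean inter-rater Pearson correlation, coefficient alpha treating raters as items, and a one-way intraclass correlation ICC(1), alongside a crude absolute-agreement statistic (the proportion of rater pairs within one scale point). The T index itself is not implemented here; the agreement statistic is only a stand-in for illustration.

```python
# Illustrative sketch only: hypothetical 5-point ratings, rows = ratees
# (managers), columns = raters. Not the paper's data or exact indices.
import numpy as np

ratings = np.array([
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
    [1, 2, 2],
], dtype=float)
n_ratees, n_raters = ratings.shape

# Mean inter-rater Pearson correlation (a common reliability estimate)
corr = np.corrcoef(ratings, rowvar=False)           # raters x raters matrix
pair_idx = np.triu_indices(n_raters, k=1)
mean_r = corr[pair_idx].mean()

# Coefficient alpha, treating raters as "items"
item_var = ratings.var(axis=0, ddof=1).sum()
total_var = ratings.sum(axis=1).var(ddof=1)
alpha = (n_raters / (n_raters - 1)) * (1 - item_var / total_var)

# One-way random-effects intraclass correlation, ICC(1)
grand_mean = ratings.mean()
ms_between = n_raters * ((ratings.mean(axis=1) - grand_mean) ** 2).sum() / (n_ratees - 1)
ms_within = ((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum() / (n_ratees * (n_raters - 1))
icc1 = (ms_between - ms_within) / (ms_between + (n_raters - 1) * ms_within)

# Crude absolute-agreement index: share of rater pairs within 1 scale point
diffs = np.abs(ratings[:, :, None] - ratings[:, None, :])
agreement = (diffs[:, pair_idx[0], pair_idx[1]] <= 1).mean()

print(f"mean inter-rater r       = {mean_r:.2f}")
print(f"coefficient alpha        = {alpha:.2f}")
print(f"ICC(1)                   = {icc1:.2f}")
print(f"within-1-point agreement = {agreement:.2f}")
```

A matrix where one rater is uniformly one point more lenient than another shows why the two families of indices can diverge: the correlational (reliability) measures stay high while absolute agreement drops, which mirrors the paper's argument that both kinds of indices should be reported.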