This study used meta-analytic methods to compare the interrater and intrarater reliabilities of ratings of 10 dimensions of job performance used in the literature; ratings of overall job performance were also examined. There was mixed support for the notion that some dimensions are rated more reliably than others. Supervisory ratings appear to have higher interrater reliability than peer ratings. Consistent with H. R. Rothstein (1990), the mean interrater reliability of supervisory ratings of overall job performance was found to be .52. In all cases, interrater reliability was lower than intrarater reliability, indicating that the inappropriate use of intrarater reliability estimates to correct for measurement error leads to biased research results. These findings have important implications for both research and practice.
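The direction of the bias follows from the standard correction for attenuation, in which an observed validity coefficient is divided by the square root of the criterion reliability. Because intrarater reliabilities exceed interrater reliabilities, substituting an intrarater estimate divides by too large a number and under-corrects. A minimal sketch in Python illustrates this; the .52 interrater value is from the study, while the observed validity of .30 and the intrarater reliability of .80 are hypothetical values chosen for illustration:

```python
import math

def correct_for_attenuation(r_obs, r_yy):
    """Disattenuate an observed correlation for criterion unreliability."""
    return r_obs / math.sqrt(r_yy)

# .52 is the mean interrater reliability reported in the study; the
# observed validity (.30) and intrarater reliability (.80) are hypothetical.
r_obs = 0.30
corrected_inter = correct_for_attenuation(r_obs, 0.52)  # appropriate estimate
corrected_intra = correct_for_attenuation(r_obs, 0.80)  # under-corrected

print(round(corrected_inter, 3))  # 0.416
print(round(corrected_intra, 3))  # 0.335
```

Using the (higher) intrarater reliability yields a smaller corrected coefficient, understating the true relationship and biasing downstream research conclusions.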