G.J. Meyer, Simple procedures to estimate chance agreement and kappa for the interrater reliability of response segments using the Rorschach Comprehensive System, Journal of Personality Assessment, 72(2), 1999, pp. 230-255
When determining interrater reliability for scoring the Rorschach Comprehensive System (Exner, 1993), researchers often report coding agreement for response segments (e.g., Location, Developmental Quality, Determinants). Currently, however, it is difficult to calculate kappa coefficients for these segments because it is tedious to generate the chance agreement rates required for kappa computations. This study facilitated kappa calculations for response segments by developing and validating formulas to estimate chance agreement. Formulas were developed for 11 segments using 400 samples, cross-validated on 100 samples, and applied to the data from 5 reliability studies. On cross-validation, the validity of the prediction formulas ranged from .93 to 1.0 (M = .98). In the 5 reliability studies, the average difference between estimated and actual chance agreement rates was .00048, and the average difference between estimated and actual kappa values was .00011 (maximum = .0052). Thus, the regression formulas quite accurately predicted chance agreement rates and kappa coefficients for response segments.
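Once a chance agreement rate is in hand, kappa follows directly: kappa = (observed agreement - chance agreement) / (1 - chance agreement). A minimal sketch in Python of that computation, assuming two-rater proportion agreement; the function name and example values are illustrative and not taken from the article:

    def kappa(observed_agreement, chance_agreement):
        # Cohen's kappa: agreement beyond chance, scaled by the
        # maximum possible agreement beyond chance.
        return (observed_agreement - chance_agreement) / (1.0 - chance_agreement)

    # Illustrative values: raters agree on 92% of a segment's codes,
    # with an estimated chance agreement rate of 55% for that segment.
    print(kappa(0.92, 0.55))  # ~0.82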