The kappa coefficient measures chance-corrected agreement between two observers in the dichotomous classification of subjects. The marginal probability of classification by each rater may, however, depend on one or more confounding variables, and failure to account for these confounders may lead to inflated estimates of agreement. A multinomial model is used that assumes both raters have the same marginal probability of classification, but allows this probability to depend on one or more covariates. The model may be fit using software for conditional logistic regression, and likelihood-based confidence intervals for the parameter representing agreement may be computed. A simple example is discussed to illustrate model fitting and application of the technique.
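The abstract itself gives no computational details, but the unadjusted kappa coefficient it builds on can be sketched briefly. The following is a minimal illustration (not the paper's covariate-adjusted model), using hypothetical cell counts for a 2x2 table of two raters' dichotomous classifications:

```python
def cohen_kappa(a, b, c, d):
    """Chance-corrected agreement for a 2x2 table:
        a = both raters positive,  b = rater 1 only positive,
        c = rater 2 only positive, d = both raters negative.
    """
    n = a + b + c + d
    p_obs = (a + d) / n                    # observed proportion of agreement
    p1 = (a + b) / n                       # rater 1 marginal P(positive)
    p2 = (a + c) / n                       # rater 2 marginal P(positive)
    p_exp = p1 * p2 + (1 - p1) * (1 - p2)  # agreement expected by chance alone
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical counts: observed agreement 0.7, chance agreement 0.5
print(cohen_kappa(20, 5, 10, 15))  # 0.4
```

The covariate-adjusted model described in the abstract replaces the single pair of marginal probabilities with rater-shared probabilities that vary with covariates, which is what permits the fit via conditional logistic regression software.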