The main purposes of the present article are (a) to exemplify a link between Fleiss's multiple-rater kappa and its analogues and the generalizability (G) coefficient for a single-facet design, (b) to explore the possible utility and interpretation of G theory in the study of interrater agreement when the data are measured on a nominal scale, and (c) to explain why the G coefficient is preferable to the kappa coefficient and its derived forms.
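Since Fleiss's multiple-rater kappa is central to purposes (a) and (c), a minimal sketch of its standard computation may help fix ideas. The function below implements the usual formula: per-subject observed agreement is averaged and corrected for the chance agreement implied by the marginal category proportions. The function name and input layout are illustrative, not from the article.

```python
def fleiss_kappa(counts):
    """Fleiss's multiple-rater kappa for nominal ratings.

    counts[i][j] = number of raters assigning subject i to category j.
    Every subject must be rated by the same number of raters n.
    """
    N = len(counts)               # number of subjects
    k = len(counts[0])            # number of categories
    n = sum(counts[0])            # raters per subject
    # Observed agreement for each subject, then its mean across subjects.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N
    # Chance agreement from the marginal proportion of each category.
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Three raters, two categories, unanimous on every subject:
print(fleiss_kappa([[3, 0], [0, 3], [3, 0]]))  # → 1.0
```

With nominal data and a single rater facet, the variance components underlying this same rating table are what a G study would decompose, which is the link the article develops.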