The article addresses the issue of intercoder reliability in meta-analyses. The current practice of reporting a single, mean intercoder agreement score in meta-analytic research leads to systematic bias and overestimates the true reliability. An alternative approach is recommended in which average intercoder agreement scores or other reliability statistics are calculated within clusters of coded variables. These clusters form a hierarchy in which the correctness of coding decisions at a given level of the hierarchy is contingent on decisions made at higher levels. Two separate studies of intercoder agreement in meta-analysis are presented to assess the validity of the model.
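The contrast between a single pooled agreement score and cluster-level scores can be sketched as follows. This is an illustrative example with hypothetical data and cluster names; the article does not specify these variables or values.

```python
# Illustrative sketch (hypothetical data): contrast a single pooled mean
# agreement score with agreement averaged within clusters of coded variables.
from statistics import mean

# Percent-agreement per coded variable, grouped by hypothetical cluster.
# Clusters lower in the hierarchy are contingent on higher-level decisions.
agreement_by_cluster = {
    "study_inclusion": [0.95, 0.92],         # top of the hierarchy
    "effect_size_extraction": [0.85, 0.80],  # contingent on inclusion
    "moderator_coding": [0.70, 0.65],        # contingent on extraction
}

# Single pooled mean: the practice the article argues overstates reliability.
all_scores = [s for scores in agreement_by_cluster.values() for s in scores]
pooled_mean = mean(all_scores)

# Cluster-level means: the recommended alternative.
cluster_means = {c: mean(s) for c, s in agreement_by_cluster.items()}

print(f"pooled mean agreement: {pooled_mean:.3f}")
for cluster, m in cluster_means.items():
    print(f"{cluster}: {m:.3f}")
```

In this toy example the pooled mean (about 0.81) masks the fact that the lowest cluster in the hierarchy agrees only about two-thirds of the time, which is the kind of overestimation the article describes.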