Many concepts in sociology are difficult or impossible to measure objectively. This limitation forces a reliance on subjective measures that typically contain both systematic and random measurement errors. Systematic errors, or "biases," are the focus of this paper. Campbell and Fiske's (1959) multitrait-multimethod (MTMM) research design is the best-known social scientific procedure for uncovering systematic errors, but the data requirements of classical MTMM designs are too demanding for many areas of sociology in which secondary data are the norm. We show that the benefits of the MTMM design are available under more relaxed conditions. In addition, we illustrate how researchers can examine the determinants of systematic errors and gain insight into the potential for confounding or spurious effects caused by systematic errors. We demonstrate the usefulness of these methods with the subjective measures of liberal democracy employed in several recent ASR papers and provide additional examples, including measures of the reputational quality of graduate programs and job evaluations for comparable-worth investigations. We conclude that sociologists can do far more to understand the systematic error present in their subjective variables.