C.C. Hoffman et al., So many jobs, so little "N": Applying expanded validation models to support generalization of cognitive test validity, Personnel Psychology, 53(4), 2000, pp. 955-991
This paper describes a case study in which practitioners were faced with the challenge of validating cognitive ability tests in a setting where additional criterion-related validation research was not technically feasible. Research conducted within this organization had reached the point of diminishing returns because most of the "large incumbent" jobs had already been the subject of validation research, and the remaining jobs had relatively few incumbents. Landy (1986) and, more recently, Binning and Barrett (1989) characterized validation as the process of accumulating a variety of forms of judgmental and empirical evidence to support inferences regarding psychological constructs and operational measures of those constructs. The converging lines of evidence brought together in this study by the synthesis of data from externally conducted validity generalization (VG) research, internal validation studies, test transportability, job component validity, and analysis of attribute requirements support inferences regarding the validity of cognitive ability tests for predicting training and job performance for company nonmanagement jobs. This study demonstrates the soundness and practicality of the advice that Landy (1986) and Binning and Barrett (1989) provided regarding validity models. Although this study does not fit neatly into any one of the three "boxes" (Landy, 1986) that the Guidelines allow in supporting validation efforts, it is likely more defensible than if we had followed the Guidelines' prescriptions by rote. The interlinking systems of job families and test batteries described here and in Hoffman (1999) are also responsive to company needs regarding cost containment and quick implementation of staffing systems.