This paper applies the conceptual work of K. Kraiger, J. K. Ford, and E. Salas (1993) to the evaluation of two training programs. A method known as structural assessment (SA) was described and adapted for use in the evaluation of a training program for computer programming and a PC-based simulation of a naval decision-making task. SA represents and evaluates pairwise judgments of the relatedness of concepts drawn from the training content domain. In the first study, SA scores of students (determined by similarity to an expert solution) were significantly higher after training than before but did not predict performance on a take-home exam 12 weeks later. In the second study, we manipulated training content by providing half the students with the goals and objectives of the transfer task (an advance organizer) before training and providing the other half with the same information after training. As hypothesized, SA scores were higher for those receiving the organizers before training; SA scores were also more strongly related to performance on the criterion task for this group. Implications of the results for training evaluation are discussed.
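The abstract does not specify how similarity to the expert solution is computed. One common operationalization of such a score is to correlate a trainee's pairwise relatedness ratings with the expert's across all concept pairs. A minimal sketch under that assumption (the function name, rating scale, and example data are hypothetical, not drawn from the studies):

```python
from itertools import combinations
from math import sqrt

def sa_score(student, expert, concepts):
    """Pearson correlation between a student's and an expert's
    pairwise relatedness ratings (one possible SA scoring rule)."""
    pairs = list(combinations(concepts, 2))
    s = [student[p] for p in pairs]
    e = [expert[p] for p in pairs]
    n = len(pairs)
    ms, me = sum(s) / n, sum(e) / n
    cov = sum((a - ms) * (b - me) for a, b in zip(s, e))
    sd_s = sqrt(sum((a - ms) ** 2 for a in s))
    sd_e = sqrt(sum((b - me) ** 2 for b in e))
    return cov / (sd_s * sd_e)

# Toy example: relatedness of programming concepts on a 1-5 scale.
concepts = ["loop", "variable", "function"]
expert = {("loop", "variable"): 4, ("loop", "function"): 5,
          ("variable", "function"): 3}
student = {("loop", "variable"): 5, ("loop", "function"): 4,
           ("variable", "function"): 2}
print(round(sa_score(student, expert, concepts), 2))  # prints 0.65
```

A score near 1 indicates a knowledge structure closely matching the expert's; scores near 0 indicate little correspondence. Other structural-assessment work uses network-based closeness measures (e.g., Pathfinder) rather than raw correlation, so this is only one plausible scoring choice.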