B. E. Clauser et al., Development of Automated Scoring Algorithms for Complex Performance Assessments: A Comparison of Two Approaches, Journal of Educational Measurement, 34(2), 1997, pp. 141-161
Performance assessments are typically scored by having experts rate individual performances. The cost associated with using expert raters may represent a serious limitation in many large-scale testing programs. The use of raters may also introduce an additional source of error into the assessment. These limitations have motivated the development of automated scoring systems for performance assessments. Preliminary research has shown these systems to have application across a variety of tasks ranging from simple mathematics to architectural problem solving. This study extends research on automated scoring by comparing alternative automated systems for scoring a computer simulation test of physicians' patient management skills: one system uses regression-derived weights for components of the performance; the other uses complex rules to map performances into score levels. The procedures are evaluated by comparing the resulting scores to expert ratings of the same performances.
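The contrast between the two scoring systems can be sketched in miniature. The component names, weights, rule thresholds, and score levels below are all hypothetical illustrations, not the study's actual scoring parameters: a regression-based scorer combines performance components linearly with empirically derived weights, while a rule-based scorer maps a performance onto an ordinal score level through explicit expert-authored rules.

```python
# Hypothetical sketch of the two automated scoring approaches.
# All component names, weights, and thresholds are invented for
# illustration; they are not taken from the Clauser et al. study.

def regression_score(components, weights):
    """Approach 1: linear combination of performance components
    using regression-derived weights."""
    return sum(weights[k] * components[k] for k in components)

def rule_based_score(components):
    """Approach 2: map a performance into an ordinal score level
    (1-4 here) via explicit rules. Thresholds are illustrative."""
    # Rule 1: any dangerous action caps the score at the lowest level.
    if components["dangerous_actions"] > 0:
        return 1
    # Rule 2: all essential actions taken and none unnecessary -> top level.
    if (components["essential_actions"] == components["essential_total"]
            and components["unnecessary_actions"] == 0):
        return 4
    # Rule 3: otherwise grade on the fraction of essential actions taken.
    frac = components["essential_actions"] / components["essential_total"]
    return 3 if frac >= 0.75 else 2

performance = {
    "essential_actions": 6,     # essential actions the examinee performed
    "essential_total": 8,       # essential actions the case requires
    "unnecessary_actions": 2,   # superfluous actions ordered
    "dangerous_actions": 0,     # actions that could harm the patient
}
weights = {  # regression-derived in the study; arbitrary values here
    "essential_actions": 1.0,
    "essential_total": 0.0,
    "unnecessary_actions": -0.5,
    "dangerous_actions": -2.0,
}

print(regression_score(performance, weights))  # continuous score: 5.0
print(rule_based_score(performance))           # ordinal level: 3
```

The regression scorer yields a continuous score on an arbitrary scale, whereas the rule-based scorer yields a discrete level; this is one reason the study evaluates both against expert ratings of the same performances rather than against each other directly.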