P.L. Rogers et al., Quantifying learning in medical students during a critical care medicine elective: A comparison of three evaluation instruments, Crit Care Med, 29(6), 2001, pp. 1268-1273
Objective: To compare three different evaluation instruments and determine whether they measure different aspects of medical student learning.
Design: Student learning was evaluated before and after a structured critical care elective, using a crossover design, with written examinations, an objective structured clinical examination, and a patient simulator that used two clinical scenarios.
Participants: Twenty-four 4th-yr students enrolled in the critical care medicine elective.
Interventions: All students took a multiple-choice written examination; evaluated a live simulated critically ill patient, requested data from a nurse, and intervened as appropriate at different stations (objective structured clinical examination); and evaluated the computer-controlled patient simulator and intervened as appropriate.
Measurements and Main Results: Students' knowledge was assessed by using a multiple-choice examination containing the same data incorporated into the other examinations. Student performance on the objective structured clinical examination was evaluated at five stations. Both the objective structured clinical examination and the simulator tests were videotaped for subsequent scoring of responses, quality of responses, and response time. The videotapes were reviewed for specific behaviors by faculty masked to time of examination. Students were expected to perform the following: a) assess airway, breathing, and circulation; b) prepare a mannequin for intubation; c) provide appropriate ventilator settings; d) manage hypotension; and e) request, interpret, and provide appropriate intervention for pulmonary artery catheter data.
Students were expected to perform identical behaviors during the simulator examination; however, the entire examination was performed on the whole-body, computer-controlled mannequin. The primary outcome measure was the difference in examination scores before and after the rotation. The mean preelective scores were 77 +/- 16%, 47 +/- 15%, and 41 +/- 14% for the written examination, objective structured clinical examination, and simulator, respectively, compared with 89 +/- 11%, 76 +/- 12%, and 62 +/- 15% after the elective (p < .0001). Prerotation scores for the written examination were significantly higher than those for the objective structured clinical examination or the simulator; postrotation scores were highest for the written examination and lowest for the simulator.
Conclusion: Written examinations measure acquisition of knowledge but fail to predict whether students can apply knowledge to problem solving, whereas both the objective structured clinical examination and the computer-controlled patient simulator can be used as effective performance evaluation tools.