Background: The authors wished to determine whether a simulator-based technique for evaluating clinical performance could demonstrate construct validity, and to determine the subjects' perception of the realism of the evaluation process.
Methods: Research ethics board approval and informed consent were obtained. Subjects were 33 university-based anesthesiologists, 46 community-based anesthesiologists, 23 final-year anesthesiology residents, and 37 final-year medical students. The simulation involved patient evaluation, induction, and maintenance of anesthesia. Each problem was scored as follows: no response to the problem, score = 0; compensating intervention, score = 1; and corrective treatment, score = 2. Examples of problems included atelectasis, coronary ischemia, and hypothermia. After the simulation, participants rated the realism of their experience on a 10-point visual analog scale (VAS).
Results: After testing for internal consistency, a seven-item scenario remained. The mean proportion of correct responses (out of 7) for each group was as follows: university-based anesthesiologists = 0.53, community-based anesthesiologists = 0.38, residents = 0.54, and medical students = 0.15. The overall group differences were significant (P < 0.0001). The overall realism VAS score was 7.8. There was no relation between the simulator score and the realism VAS (R = -0.07, P = 0.41).
Conclusions: The simulation-based evaluation method was able to discriminate between practice categories, demonstrating construct validity. Subjects rated the realism of the test scenario highly, and the absence of any relation between simulator score and realism rating suggests that familiarity or comfort with the simulation environment had little or no effect on performance.