Faculty evaluation by residents in an emergency medicine program: A new evaluation instrument

Citation
I.P. Steiner et al., Faculty evaluation by residents in an emergency medicine program: A new evaluation instrument, ACAD EM MED, 7(9), 2000, pp. 1015-1021
Citations number
48
Subject category
Anesthesia & Intensive Care
Journal title
ACADEMIC EMERGENCY MEDICINE
ISSN journal
10696563 → ACNP
Volume
7
Issue
9
Year of publication
2000
Pages
1015 - 1021
Database
ISI
SICI code
1069-6563(200009)7:9<1015:FEBRIA>2.0.ZU;2-V
Abstract
Objective: Evaluation of preceptors in training programs is essential; however, little research has been performed in the setting of the emergency department (ED). The goal of this pilot study was to determine the validity and reliability of a faculty evaluation instrument, the Emergency Rotation (ER) scale, developed specifically for use in emergency medicine (EM). Methods: A prospective study comparing the ER scale with two alternative faculty evaluation instruments was completed in three of the five EDs affiliated with an EM teaching program, where emergency physicians are members of the clinical teaching faculty. The participants were 18 residents (postgraduate years 1, 2, and 3) who were completing four-week clinical rotations in EM. At the end of the rotation, residents recorded their evaluations of each emergency physician with whom they had clinical encounters on the following evaluation tools: the ER scale, a longer validated scale (Irby), and a global assessment scale (GAS). Domain scores were correlated with the previously validated scale and the GAS to determine validity using a multitrait-multimethod matrix. The reliability of the ER scale was measured using a Cronbach's alpha coefficient. Results: Forty-eight preceptor evaluations were completed on 29 individual preceptors. The rating of preceptors was high using the ER scale (median: 16 of 20; IQR: 13, 18), Irby (median: 300 of 378; IQR: 267, 321), or GAS (mean: 7.8 of 10; SD: 1.3). Domain scores for each tool were used in the multitrait-multimethod matrix, and the correlations between a previously validated tool and the ER scale were found to be high (>0.70) in the various domains. The internal consistency of the ER scale was also high (r = 0.85). Conclusions: The ER scale appears to be valid and reliable. It performs well when compared with previously psychometrically tested tools. It is a sensible, well-adapted tool for the teaching environment offered by EM.
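The abstract reports internal consistency via Cronbach's alpha. As an illustration only (the paper does not publish its computation code, and the function and data below are hypothetical), alpha can be computed from a raters-by-items score matrix using its standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a 2-D score matrix.

    scores: rows = raters (e.g., resident evaluations),
            columns = scale items (e.g., the ER scale's rated domains).
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of each rater's total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 3 raters scoring 2 items in perfect agreement
# across items yields the maximum alpha of 1.0.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))
```

In a study like this one, each row would be one of the 48 preceptor evaluations and each column one item of the ER scale; values near the reported 0.85 indicate high internal consistency.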