Background. It is reasonable to propose that competence is a multifaceted characteristic defined in part by some minimum level of knowledge and skill. In this study we examined the relationship between surgical faculty's judgment of clinical competence, as measured by a surgical resident objective structured clinical examination (OSCE), and the residents' objective performance on the skills being tested.
Methods. Fifty-six general surgery residents at all levels of training participated in a 30-station OSCE. At the completion of each station, the faculty proctor made several overall judgments regarding each resident's performance, including a global judgment of competent or not competent. The competence judgment was applied to the objective percentage performance score in three different ways to construct methods for determining competence based solely upon this objective percentage score.
Results. The average mean competent score (MCS) across the stations was 61%, and the average mean noncompetent score (MNCS) was 38%. The difference between MCS and MNCS for each station was very consistent. Upper threshold scores, above which a judgment of competent was always made, and lower threshold scores, below which a judgment of noncompetent was always made, were observed. Overall, the average mean and threshold scores for the competent and noncompetent groups were remarkably similar. For performance scores in the range between the threshold competent and noncompetent scores at each station, measures other than objective performance on the skills being evaluated determined the judgment of competent or not competent.
Conclusions. Empirically determined minimum acceptable standards for objective performance in clinical skills and knowledge appeared to have been subconsciously applied to the competence judgment by the faculty evaluators in this study. Other factors appeared to have become determinant when the objective performance score fell within a range of uncertainty. (C) 1999 Academic Press.