Evaluation of competence in the interpretation of chest radiographs

Citation
P. N. Cascade et al., Evaluation of competence in the interpretation of chest radiographs, ACAD RADIOL, 8(4), 2001, pp. 315-321
Number of citations
18
Subject categories
Radiology ,Nuclear Medicine & Imaging
Journal title
ACADEMIC RADIOLOGY
ISSN journal
1076-6332
Volume
8
Issue
4
Year of publication
2001
Pages
315 - 321
Database
ISI
SICI code
1076-6332(200104)8:4<315:EOCITI>2.0.ZU;2-3
Abstract
Rationale and Objectives. The purpose of this study was to determine relative rates of missed diagnoses for radiologists as a measure of competence in interpreting chest radiographs.

Materials and Methods. Cases involving differing interpretations of chest radiographs were collected from January 1994 through December 1999 by faculty (chest and nonchest radiology specialists) in an academic radiology department. A quarterly peer-review process designated cases, months after the fact and anonymously, as no miss or as class I (nondiagnosable), class II (very difficult diagnosis), class III (should be diagnosed most of the time), or class IV (should almost always be diagnosed) missed diagnoses. The rates and classes of missed diagnoses were compared among chest faculty and for the nonchest radiology specialists as a group.

Results. Chest radiologists read 184,977 studies, and nonchest radiologists read 300,684 studies. Of these, 243 missed diagnoses were classified (classes I and II, 184 cases; class III, 50; and class IV, nine). No difference was detected in the rate of class III and IV misses among chest faculty, but nonchest faculty had significantly more class III (P = .022) and class IV misses (P = .016).

Conclusion. Random sampling of differing interpretations can yield a relative rate of missed diagnoses for radiologists. No difference was detected in clinically important misses (ie, classes III and IV) among chest radiologists, but a statistically significantly higher rate of seemingly obvious misdiagnoses was found for nonchest specialty radiologists. Potential biases may have influenced this analysis, including disease prevalence, sampling, clinical factors, observer variability, and truth-in-diagnosis.
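
Note. The abstract reports study totals and P values but does not name the statistical test or the per-group split of class III misses. The following is a minimal Python sketch of the kind of rate comparison described, assuming a two-sided Fisher exact test; the per-group class III counts (12 and 38, summing to the reported total of 50) are hypothetical placeholders, not data from the paper.

# Sketch of the rate comparison described in the Results section.
# Study totals come from the abstract; the per-group class III miss
# counts are ASSUMED -- the abstract does not report the split.
from scipy.stats import fisher_exact

chest_studies = 184_977      # studies read by chest radiologists (abstract)
nonchest_studies = 300_684   # studies read by nonchest radiologists (abstract)
chest_class3 = 12            # hypothetical class III misses, chest faculty
nonchest_class3 = 38         # hypothetical class III misses, nonchest faculty

# Miss rate per 10,000 studies for each group.
for label, misses, studies in [
    ("chest", chest_class3, chest_studies),
    ("nonchest", nonchest_class3, nonchest_studies),
]:
    rate = 10_000 * misses / studies
    print(f"{label}: {rate:.2f} class III misses per 10,000 studies")

# 2x2 table of [misses, studies without a miss] per group,
# compared with a two-sided Fisher exact test.
table = [
    [chest_class3, chest_studies - chest_class3],
    [nonchest_class3, nonchest_studies - nonchest_class3],
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.3f}")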