G.W. Moore and J.J. Berman, "Performance Analysis of Manual and Automated Systematized Nomenclature of Medicine (SNOMED) Coding," American Journal of Clinical Pathology, 101(3), 1994, pp. 253-256.
Many pathology departments rely on the accuracy of computer-generated diagnostic coding for surgical specimens. At present, there are no published guidelines to assure the quality of coding devices. To assess the performance of Systematized Nomenclature of Medicine (SNOMED) coding software, manual coding was compared with automated coding in 9,353 consecutive surgical pathology reports at the Baltimore Veterans Affairs Medical Center. Manual SNOMED coding produced 13,454 morphologic codes comprising 519 distinct codes; 209 were unique codes (assigned to only one report apiece). Automated coding obtained 23,744 morphologic codes comprising 498 distinct codes, of which 129 were unique codes. Only 44 (0.5%) instances were found in which automated coding missed key diagnoses on surgical case reports. Thus, automated coding compared favorably with manual coding. To achieve maximum performance, departments should monitor the output from automated coders. Modifications in reporting style, code dictionaries, and coding algorithms can lead to improved coding performance.
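The headline figures can be verified with a short calculation from the counts the abstract reports (a sketch; the variable names and the codes-per-report ratios are ours, not the authors'):

```python
# Figures reported in the abstract
reports = 9353        # consecutive surgical pathology reports
manual_codes = 13454  # morphologic codes assigned by manual SNOMED coding
auto_codes = 23744    # morphologic codes assigned by automated coding
missed = 44           # instances where automated coding missed a key diagnosis

# Fraction of reports with a missed key diagnosis
miss_rate = missed / reports
print(f"miss rate: {miss_rate:.2%}")  # ~0.47%, rounded to 0.5% in the abstract

# Average morphologic codes per report under each method (derived, not reported)
print(f"manual codes per report:    {manual_codes / reports:.2f}")  # ~1.44
print(f"automated codes per report: {auto_codes / reports:.2f}")    # ~2.54
```

The ratio confirms that the automated coder emitted substantially more codes per report than manual coding while missing a key diagnosis in roughly one report in two hundred.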