W.B. Long et al., "An Evaluation of Expert Human and Automated Abbreviated Injury Scale and ICD-9-CM Injury Coding," The Journal of Trauma, Injury, Infection, and Critical Care, 36(4), 1994, pp. 499-503
Two hundred ninety-five injury descriptions from 135 consecutive patients treated at a level-I trauma center were coded by three human coders (H1, H2, H3) and by TRI-CODE (T), a PC-based artificial-intelligence software program. Two of the study coders are nationally recognized experts who teach AIS coding for its developers (the Association for the Advancement of Automotive Medicine); the third has 5 years' experience in ICD and AIS coding. A "correct coding" (CC) was established for the study injury descriptions, and coding results were obtained for each coder relative to the CC. The correct ICD codes were selected in 96% of cases for H2, 92% for H1, 91% for T, and 86% for H3. The three human coders agreed on 222 (75%) injuries. The correct seven-digit AIS codes (six identifying digits plus the severity digit) were selected in 93% of cases for H2, 87% for T, 77% for H3, and 73% for H1. The correct AIS severity codes (seventh digit only) were selected in 98.3% of cases for H2, 96.3% for T, 93.9% for H3, and 90.8% for H1. On the basis of the weighted kappa statistic, TRI-CODE had excellent agreement with the correct coding (CC) of AIS severities. Each human coder had excellent agreement with CC and with TRI-CODE. Coders H1 and H2 were in excellent agreement with each other, and coder H3 was in good agreement with H1 and H2. However, the human coders' errors often occurred on different codes, accentuating the variability. We conclude that automated coding can be as accurate as coding by human experts and could free current coders for proper injury abstraction or other trauma registry duties. TRI-CODE will continue to improve as errors are detected and corrected, requires minimal training, and would significantly increase consistency in statewide and national registries.
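The agreement results above are reported via the weighted kappa statistic, which discounts chance agreement and penalizes disagreements by their distance on an ordinal scale (such as the AIS severity digit). The paper does not give its weighting scheme; the sketch below uses quadratic weights, a common choice for ordinal ratings, and the function name and toy ratings are illustrative only:

```python
def weighted_kappa(r1, r2, categories, weights="quadratic"):
    """Weighted Cohen's kappa between two raters over ordinal categories.

    Assumes `categories` lists the ordered levels; quadratic weights
    penalize a disagreement of d levels by (d/(k-1))**2.
    """
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # Observed contingency table: O[i][j] = times rater 1 said i, rater 2 said j.
    O = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        O[idx[a]][idx[b]] += 1
    row = [sum(O[i]) for i in range(k)]              # rater-1 marginals
    col = [sum(O[i][j] for i in range(k)) for j in range(k)]  # rater-2 marginals
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            d = abs(i - j)
            w = (d / (k - 1)) ** 2 if weights == "quadratic" else d / (k - 1)
            num += w * O[i][j]                # observed weighted disagreement
            den += w * row[i] * col[j] / n    # disagreement expected by chance
    return 1 - num / den if den else 1.0

# Illustrative ratings (not from the study): identical ratings give kappa = 1,
# and chance-level agreement on a two-level scale gives kappa = 0.
print(weighted_kappa([1, 2, 3], [1, 2, 3], [1, 2, 3]))   # perfect agreement
print(weighted_kappa([0, 0, 1, 1], [0, 1, 1, 0], [0, 1]))  # chance agreement
```

The "excellent" and "good" labels in the abstract correspond to conventional cut-points on the kappa scale (values near 1 indicate agreement well beyond chance).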