MONITORING EXPERT-SYSTEM PERFORMANCE USING CONTINUOUS USER FEEDBACK

Citation
M.G. Kahn et al., MONITORING EXPERT-SYSTEM PERFORMANCE USING CONTINUOUS USER FEEDBACK, Journal of the American Medical Informatics Association, 3(3), 1996, pp. 216-223
Citations number
12
Subject Categories
Information Science & Library Science; Computer Science, Information Systems; Medical Informatics
ISSN journal
1067-5027
Volume
3
Issue
3
Year of publication
1996
Pages
216 - 223
Database
ISI
SICI code
1067-5027(1996)3:3<216:MEPUCU>2.0.ZU;2-1
Abstract
Objective: To evaluate the applicability of metrics collected during routine use to monitor the performance of a deployed expert system.

Methods: Two extensive formal evaluations of the GermWatcher (Washington University School of Medicine) expert system were performed approximately six months apart. Deficiencies noted during the first evaluation were corrected via a series of interim changes to the expert system rules, even though the expert system was in routine use. As part of their daily work routine, infection control nurses reviewed expert system output and changed the output results with which they disagreed. The rate of nurse disagreement with expert system output was used as an indirect or surrogate metric of expert system performance between formal evaluations. The results of the second evaluation were used to validate the disagreement rate as an indirect performance measure. Based on continued monitoring of user feedback, expert-system changes incorporated after the second formal evaluation have resulted in additional improvements in performance.

Results: The rate of nurse disagreement with GermWatcher output decreased consistently after each change to the program. The second formal evaluation confirmed a marked improvement in the program's performance, justifying the use of the nurses' disagreement rate as an indirect performance metric.

Conclusions: Metrics collected during the routine use of the GermWatcher expert system can be used to monitor the performance of the expert system. The impact of improvements to the program can be followed using continuous user feedback without requiring extensive formal evaluations after each modification. When possible, the design of an expert system should incorporate measures of system performance that can be collected and monitored during the routine use of the system.
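The surrogate metric described in the abstract (the fraction of expert-system outputs that reviewing nurses change) can be sketched as a simple running computation. The data structure and function names below are illustrative assumptions for exposition, not taken from the paper or the GermWatcher system:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Review:
    """One nurse review of one expert-system output (hypothetical record shape)."""
    reviewed_on: date
    nurse_disagreed: bool  # True if the nurse changed the system's result


def disagreement_rate(reviews):
    """Fraction of reviewed outputs that nurses changed: the surrogate metric."""
    if not reviews:
        raise ValueError("no reviews in this period")
    return sum(r.nurse_disagreed for r in reviews) / len(reviews)


def monthly_rates(reviews):
    """Group reviews by (year, month) and compute the rate per month, so an
    improvement after a rule change appears as a falling trend between the
    formal evaluations."""
    by_month = {}
    for r in reviews:
        key = (r.reviewed_on.year, r.reviewed_on.month)
        by_month.setdefault(key, []).append(r)
    return {month: disagreement_rate(group) for month, group in sorted(by_month.items())}
```

For example, one disagreement among four January reviews yields a monthly rate of 0.25; plotting these monthly values over time is the continuous-feedback trend the paper uses in place of repeated formal evaluations.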