AN OPTIMAL APPROXIMATION ALGORITHM FOR BAYESIAN INFERENCE

Authors
P. Dagum, M. Luby
Citation
P. Dagum and M. Luby, AN OPTIMAL APPROXIMATION ALGORITHM FOR BAYESIAN INFERENCE, Artificial Intelligence, 93(1-2), 1997, pp. 1-27
Citations number
35
Categorie Soggetti
Computer Sciences, Special Topics; Computer Science, Artificial Intelligence; Ergonomics
Journal title
Artificial Intelligence
ISSN journal
0004-3702
Volume
93
Issue
1-2
Year of publication
1997
Pages
1 - 27
Database
ISI
SICI code
0004-3702(1997)93:1-2<1:AOAAFB>2.0.ZU;2-Z
Abstract
Approximating the inference probability Pr[X = x | E = e] in any sense, even for a single evidence node E, is NP-hard. This result holds for belief networks that are allowed to contain extreme conditional probabilities, that is, conditional probabilities arbitrarily close to 0. Nevertheless, all previous approximation algorithms have failed to approximate efficiently many inferences, even for belief networks without extreme conditional probabilities. We prove that we can approximate efficiently probabilistic inference in belief networks without extreme conditional probabilities. We construct a randomized approximation algorithm, the bounded-variance algorithm, that is a variant of the known likelihood-weighting algorithm. The bounded-variance algorithm is the first algorithm with provably fast inference approximation on all belief networks without extreme conditional probabilities. From the bounded-variance algorithm, we construct a deterministic approximation algorithm using current advances in the theory of pseudorandom generators. In contrast to the exponential worst-case behavior of all previous deterministic approximations, the deterministic bounded-variance algorithm approximates inference probabilities in worst-case time that is subexponential, 2^{(log n)^d}, for some integer d that is a linear function of the depth of the belief network. (C) 1997 Elsevier Science B.V.
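For orientation, the sketch below illustrates the standard likelihood-weighting scheme that the abstract names as the starting point for the bounded-variance algorithm. It uses a hypothetical two-node binary network (a query node X with a single evidence child E) with made-up conditional probability tables; it is a minimal illustration only and does not reproduce the paper's variance-control modification that yields the proven running-time guarantee.

```python
import random

# Hypothetical two-node network: binary X with a single evidence child E.
# CPT values are illustrative assumptions, not taken from the paper.
P_X = {True: 0.3, False: 0.7}          # prior Pr[X]
P_E_GIVEN_X = {True: 0.9, False: 0.2}  # Pr[E = true | X]


def likelihood_weighting(evidence_e, num_samples=100_000, seed=0):
    """Estimate Pr[X = true | E = evidence_e] by likelihood weighting:
    sample the non-evidence node X from its prior and weight each sample
    by the likelihood of the fixed evidence E under that sample."""
    rng = random.Random(seed)
    weighted_true = 0.0
    total_weight = 0.0
    for _ in range(num_samples):
        x = rng.random() < P_X[True]            # sample X from its prior
        p_e_true = P_E_GIVEN_X[x]               # Pr[E = true | sampled X]
        w = p_e_true if evidence_e else 1.0 - p_e_true  # evidence likelihood
        total_weight += w
        if x:
            weighted_true += w
    return weighted_true / total_weight


if __name__ == "__main__":
    estimate = likelihood_weighting(evidence_e=True)
    # Exact posterior: 0.3*0.9 / (0.3*0.9 + 0.7*0.2) = 0.27 / 0.41 ≈ 0.6585
    print(f"Estimated Pr[X = true | E = true] ≈ {estimate:.4f}")
```

In this plain scheme the estimator's variance can blow up when the evidence likelihood is small; as the abstract indicates, the bounded-variance variant modifies how samples are weighted so the variance stays controlled on networks without extreme conditional probabilities, which is what enables the provable running-time bound.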