Probabilistic inference and maximum a posteriori (MAP) explanation are two important and related problems on Bayesian belief networks. Both problems are known to be NP-hard, for exact solution and approximation alike. In 1997, Dagum and Luby showed that probabilistic inference can be approximated efficiently for belief networks in which all probabilities are bounded away from 0. In this paper, we show that the corresponding result for MAP explanation does not hold: finding, or approximating, MAPs remains NP-hard for belief networks with probabilities bounded within the range [l, u] for any 0 ≤ l < 0.5 < u ≤ 1. Our results cover both deterministic and randomized approximation. © 2000 Elsevier Science B.V. All rights reserved.
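To make the two problems concrete, the following is a minimal brute-force sketch on a hypothetical two-node network A → B (all probability values are invented for illustration and do not come from the paper):

```python
# Hypothetical belief network A -> B with assumed parameters:
#   P(A=1) = 0.6
#   P(B=1 | A=1) = 0.7,  P(B=1 | A=0) = 0.2
p_a = {1: 0.6, 0: 0.4}
p_b_given_a = {(1, 1): 0.7, (0, 1): 0.3,   # (b, a) -> P(B=b | A=a)
               (1, 0): 0.2, (0, 0): 0.8}

def joint(a, b):
    """P(A=a, B=b) via the chain rule for this network."""
    return p_a[a] * p_b_given_a[(b, a)]

evidence_b = 1  # observed evidence: B = 1

# Probabilistic inference: compute P(A=1 | B=1) by enumeration.
p_b1 = sum(joint(a, evidence_b) for a in (0, 1))          # 0.42 + 0.08 = 0.50
p_a1_given_b1 = joint(1, evidence_b) / p_b1               # 0.42 / 0.50 = 0.84

# MAP explanation: the assignment to the non-evidence variable(s)
# maximizing the joint probability with the evidence.
map_a = max((0, 1), key=lambda a: joint(a, evidence_b))   # A = 1

print(p_a1_given_b1, map_a)
```

On networks this small both problems are trivial by enumeration; the paper's point is that for general networks both are NP-hard even to approximate, and that bounding probabilities into [l, u] rescues inference (Dagum and Luby) but not MAP explanation.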