Probabilistic inference is an important technique for reasoning under uncertainty in areas such as medicine, software fault diagnosis, speech recognition, and automated vision. Although it could contribute to many more applications, probabilistic inference is extremely computationally intensive, making it impractical for applications that involve large databases. One way to address this problem is to exploit the technique's available parallelism. The authors evaluated the effectiveness of performing probabilistic inference in parallel and found that it presents interesting tradeoffs between load balance and data locality, factors that are key to successful parallel applications yet are often difficult to optimize. To find the best tradeoff, the authors wrote two parallel programs, one static and one dynamic, that exploit different forms of parallelism available in probabilistic inference. They tested both programs on a 32-processor Stanford Dash and a 16-processor SGI Challenge XL, using six medium-sized belief networks. Through a series of experiments and analyses, they examined how computation time was spent and how data locality affected performance. The authors then tested the static program on a large medical diagnosis network. The static program, which maximizes data locality, outperformed the dynamic program and also reduced the time probabilistic inference takes on the large medical network. The results suggest that maintaining good data locality is crucial for obtaining good speedups and that the speedups attained depend on the network's structure and size.
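The kind of inference discussed above can be illustrated on a toy belief network. The sketch below uses the classic cloudy/sprinkler/rain example with made-up conditional probability tables and answers a query by full enumeration; the network, its numbers, and the enumeration approach are illustrative assumptions, not the authors' networks or their parallel algorithm. Its exponential cost in the number of variables is exactly why large networks motivate parallel implementations.

```python
# Illustrative belief network (assumed example, not from the paper):
# Cloudy -> Sprinkler, Cloudy -> Rain, (Sprinkler, Rain) -> WetGrass.
P_C = {True: 0.5, False: 0.5}                        # P(Cloudy)
P_S = {True: 0.1, False: 0.5}                        # P(Sprinkler=T | Cloudy)
P_R = {True: 0.8, False: 0.2}                        # P(Rain=T | Cloudy)
P_W = {(True, True): 0.99, (True, False): 0.90,      # P(WetGrass=T | S, R)
       (False, True): 0.90, (False, False): 0.0}

def joint(c, s, r, w):
    """Joint probability of one full assignment, from the CPTs."""
    ps = P_S[c] if s else 1 - P_S[c]
    pr = P_R[c] if r else 1 - P_R[c]
    pw = P_W[(s, r)] if w else 1 - P_W[(s, r)]
    return P_C[c] * ps * pr * pw

# Query P(Rain=T | WetGrass=T) by enumerating all hidden assignments.
num = sum(joint(c, s, True, True)
          for c in (True, False) for s in (True, False))
den = sum(joint(c, s, r, True)
          for c in (True, False) for s in (True, False) for r in (True, False))
p = num / den
print(f"P(Rain | WetGrass) = {p:.4f}")
```

Each term of the sums is independent of the others, which is one source of the parallelism the abstract mentions: the assignments can be partitioned across processors, with the partitioning choice driving the load-balance versus locality tradeoff.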