This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models (Bayesian networks and Markov random fields). We present a number of examples of graphical models, including the QMR-DT database, the sigmoid belief network, the Boltzmann machine, and several variants of hidden Markov models, in which it is infeasible to run exact inference algorithms. We then introduce variational methods, which exploit laws of large numbers to transform the original graphical model into a simplified graphical model in which inference is efficient. Inference in the simplified model provides bounds on probabilities of interest in the original model. We describe a general framework for generating variational transformations based on convex duality. Finally, we return to the examples and demonstrate how variational algorithms can be formulated in each case.
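As a minimal worked instance of the convex-duality construction mentioned above (the symbols $f$, $f^{*}$, and $\lambda$ here are generic illustrations, not notation fixed by this abstract): a concave function $f$ can be represented through its conjugate $f^{*}(\lambda) = \min_{x}\{\lambda x - f(x)\}$ as
\[
f(x) \;=\; \min_{\lambda}\,\bigl\{\lambda x - f^{*}(\lambda)\bigr\},
\]
so that any fixed choice of the variational parameter $\lambda$ yields the upper bound $f(x) \le \lambda x - f^{*}(\lambda)$. For example, taking $f(x) = \ln x$ gives $\ln x \le \lambda x - \ln\lambda - 1$ for all $\lambda > 0$, with equality at $\lambda = 1/x$.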