LEARNING LOOPY GRAPHICAL MODELS WITH LATENT VARIABLES: EFFICIENT METHODS AND GUARANTEES

Citation
Animashree Anandkumar and Ragupathyraj Valluvan, Learning Loopy Graphical Models with Latent Variables: Efficient Methods and Guarantees, Annals of Statistics, 41(2), 2013, pp. 401-435
Journal title: Annals of Statistics
ISSN: 0090-5364
Volume: 41
Issue: 2
Year of publication: 2013
Pages: 401-435
Database: ACNP
Abstract
The problem of structure estimation in graphical models with latent variables is considered. We characterize conditions for tractable graph estimation and develop efficient methods with provable guarantees. We consider models where the underlying Markov graph is locally tree-like and the model is in the regime of correlation decay. For the special case of the Ising model, the number of samples $n$ required for structural consistency of our method scales as $n = \Omega(\theta_{\min}^{-\delta\eta(\eta+1)-2}\log p)$, where $p$ is the number of variables, $\theta_{\min}$ is the minimum edge potential, $\delta$ is the depth (i.e., the distance from a hidden node to the nearest observed nodes), and $\eta$ is a parameter which depends on the bounds on the node and edge potentials of the Ising model. Necessary conditions for structural consistency under any algorithm are derived, and our method nearly matches this lower bound on the sample requirement. Further, the proposed method is practical to implement and provides the flexibility to control the number of latent variables and the cycle lengths in the output graph.
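To make the stated scaling concrete, the following is a minimal Python sketch that evaluates the right-hand side of the sample-complexity bound $n = \Omega(\theta_{\min}^{-\delta\eta(\eta+1)-2}\log p)$. The function name and the constant `C` are illustrative assumptions, since the $\Omega(\cdot)$ notation hides constants the abstract does not specify.

```python
import math

def sample_complexity(theta_min, delta, eta, p, C=1.0):
    """Evaluate C * theta_min^(-delta*eta*(eta+1) - 2) * log(p).

    theta_min : minimum edge potential of the Ising model
    delta     : depth (distance from a hidden node to the nearest observed nodes)
    eta       : parameter depending on bounds on node and edge potentials
    p         : number of observed variables
    C         : placeholder for the constant hidden by the Omega(.) notation
    """
    exponent = -delta * eta * (eta + 1) - 2
    return C * theta_min ** exponent * math.log(p)

# Weaker edge potentials or deeper hidden nodes sharply increase the
# required sample size, while the dependence on p is only logarithmic.
n_shallow = sample_complexity(theta_min=0.3, delta=1, eta=1.5, p=1000)
n_deep = sample_complexity(theta_min=0.3, delta=3, eta=1.5, p=1000)
assert n_deep > n_shallow
```

The example illustrates the qualitative message of the bound: the exponential penalty falls on the edge-potential strength and the hidden-node depth, not on the dimension $p$.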