Linear concepts and hidden variables

Authors
Citation
A.J. Grove and D. Roth, Linear concepts and hidden variables, MACH LEARN, 42(1-2), 2001, pp. 123-141
Citations number
15
Subject categories
AI Robotics and Automatic Control
Journal title
MACHINE LEARNING
ISSN journal
0885-6125
Volume
42
Issue
1-2
Year of publication
2001
Pages
123 - 141
Database
ISI
SICI code
0885-6125(200101)42:1-2<123:LCAHV>2.0.ZU;2-2
Abstract
We study a learning problem which allows for a "fair" comparison between unsupervised learning methods-probabilistic model construction, and more traditional algorithms that directly learn a classification. The merits of each approach are intuitively clear: inducing a model is more expensive computationally, but may support a wider range of predictions. Its performance, however, will depend on how well the postulated probabilistic model fits the data. To compare the paradigms we consider a model which postulates a single binary-valued hidden variable on which all other attributes depend. In this model, finding the most likely value of any one variable (given known values for the others) reduces to testing a linear function of the observed values. We learn the model with two techniques: the standard EM algorithm, and a new algorithm we develop based on covariances. We compare these, in a controlled fashion, against an algorithm (a version of Winnow) that attempts to find a good linear classifier directly. Our conclusions help delimit the fragility of using a model that is even "slightly" simpler than the distribution actually generating the data, vs. the relative robustness of directly searching for a good predictor.
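The reduction mentioned in the abstract, from inference in the hidden-variable model to testing a linear function, can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes a naive-Bayes-style model with one binary hidden variable H and conditionally independent binary attributes, and the parameter names (`prior`, `p`) are invented for the sketch. The posterior log-odds of H given the observed attributes then comes out as an explicit linear function w·x + b.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: binary hidden variable H with P(H=1) = prior,
# and n conditionally independent binary attributes with
# P(X_i = 1 | H = h) = p[h, i].
n = 5
prior = 0.6
p = rng.uniform(0.1, 0.9, (2, n))

def posterior_log_odds(x):
    """Direct computation of log P(H=1 | x) - log P(H=0 | x)."""
    ll1 = np.log(prior) + np.sum(x * np.log(p[1]) + (1 - x) * np.log(1 - p[1]))
    ll0 = np.log(1 - prior) + np.sum(x * np.log(p[0]) + (1 - x) * np.log(1 - p[0]))
    return ll1 - ll0

# The same quantity written as a linear function of the observations:
# predicting the most likely value of H is a linear threshold test.
w = np.log(p[1] / p[0]) - np.log((1 - p[1]) / (1 - p[0]))
b = np.log(prior / (1 - prior)) + np.sum(np.log((1 - p[1]) / (1 - p[0])))

x = rng.integers(0, 2, n)
assert np.isclose(posterior_log_odds(x), w @ x + b)
```

Since the optimal predictor under this model is linear, the comparison in the paper is "fair" in the sense that both the model-based learners (EM, the covariance method) and the direct linear learner (Winnow) are searching within the same hypothesis class.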