Factor analysis, principal component analysis, mixtures of gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and by introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.
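For concreteness, a minimal sketch of the basic generative model referred to above, written as a discrete-time linear dynamical system in standard notation (the symbols $A$, $C$, $Q$, and $R$ are notational assumptions of this sketch, not fixed by the abstract): a hidden state vector $x_t$ evolves under first-order linear dynamics, and each observation $y_t$ is a linear projection of the state, with both corrupted by additive gaussian noise,
\[
x_{t+1} = A x_t + w_t, \qquad w_t \sim \mathcal{N}(0, Q),
\]
\[
y_t = C x_t + v_t, \qquad v_t \sim \mathcal{N}(0, R).
\]
Under this reading, setting $A = 0$ recovers the static models (e.g., factor analysis with diagonal $R$, and principal component analysis in the limit $R = \lim_{\epsilon \to 0} \epsilon I$), while the simple nonlinearity mentioned above, applied to the state, yields the discrete-state models such as vector quantization and hidden Markov models.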