L.K. Saul and M.G. Rahim, "Maximum likelihood and minimum classification error factor analysis for automatic speech recognition," IEEE Transactions on Speech and Audio Processing, 8(2), 2000, pp. 115-125.
Hidden Markov models (HMM's) for automatic speech recognition rely on high-dimensional feature vectors to summarize the short-time properties of speech. Correlations between features can arise when the speech signal is nonstationary or corrupted by noise. We investigate how to model these correlations using factor analysis, a statistical method for dimensionality reduction. Factor analysis uses a small number of parameters to model the covariance structure of high-dimensional data. These parameters can be chosen in two ways: 1) to maximize the likelihood of observed speech signals, or 2) to minimize the number of classification errors. We derive an expectation-maximization (EM) algorithm for maximum likelihood estimation and a gradient descent algorithm for improved class discrimination. Speech recognizers are evaluated on two tasks, one with a small vocabulary (connected alpha digits) and one with a medium-sized vocabulary (New Jersey town names). We find that modeling feature correlations by factor analysis leads to significantly increased likelihoods and word accuracies. Moreover, the rate of improvement with model size often exceeds that observed in conventional HMM's.
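The abstract's central idea, modeling a high-dimensional covariance with few parameters via factor analysis and fitting them by EM, can be sketched as follows. This is a minimal illustration under standard assumptions (a single factor-analysis model, Sigma = Lambda Lambda^T + Psi with Psi diagonal), not the authors' per-state HMM implementation; all names (`em_step`, `n` of factors, etc.) are illustrative.

```python
# Sketch: EM for factor analysis of d-dimensional data.
# Model: x = Lambda z + eps, z ~ N(0, I_k), eps ~ N(0, Psi) with Psi diagonal,
# so the implied covariance is Sigma = Lambda Lambda^T + Psi
# (d*k + d parameters instead of d*(d+1)/2).
import numpy as np

def em_step(X, Lam, Psi):
    """One EM update. X: (N, d) centered data; Lam: (d, k); Psi: (d,) diagonal."""
    N, d = X.shape
    k = Lam.shape[1]
    # E-step: posterior over latent factors, p(z|x) = N(G Lam^T Psi^-1 x, G).
    PsiInv = np.diag(1.0 / Psi)
    G = np.linalg.inv(np.eye(k) + Lam.T @ PsiInv @ Lam)  # shared posterior covariance
    Ez = X @ PsiInv @ Lam @ G                            # (N, k) posterior means
    Ezz = N * G + Ez.T @ Ez                              # sum over i of E[z_i z_i^T]
    # M-step: re-estimate loadings and diagonal noise variances.
    Lam_new = (X.T @ Ez) @ np.linalg.inv(Ezz)
    S = (X.T @ X) / N                                    # sample covariance
    Psi_new = np.diag(S - Lam_new @ (Ez.T @ X) / N)
    return Lam_new, Psi_new

# Synthetic data actually generated by a factor-analysis model.
rng = np.random.default_rng(0)
d, k, N = 10, 2, 5000
Lam_true = rng.normal(size=(d, k))
Psi_true = 0.1 + rng.uniform(size=d)
Z = rng.normal(size=(N, k))
X = Z @ Lam_true.T + rng.normal(size=(N, d)) * np.sqrt(Psi_true)
X -= X.mean(axis=0)

# Run EM from a random start; likelihood increases monotonically at each step.
Lam, Psi = rng.normal(size=(d, k)), np.ones(d)
for _ in range(200):
    Lam, Psi = em_step(X, Lam, Psi)

# The fitted low-rank-plus-diagonal covariance approximates the sample covariance.
Sigma_fit = Lam @ Lam.T + np.diag(Psi)
S = (X.T @ X) / N
```

In the paper's setting, each HMM state carries its own factor-analyzed Gaussian (or mixture), and the discriminative variant replaces the EM likelihood objective with a gradient-based minimum-classification-error update; the E/M algebra above is the maximum-likelihood building block.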