Part of Advances in Neural Information Processing Systems 10 (NIPS 1997)
Lawrence Saul, Mazin Rahim
Hidden Markov models (HMMs) for automatic speech recognition rely on high dimensional feature vectors to summarize the short-time properties of speech. Correlations between features can arise when the speech signal is non-stationary or corrupted by noise. We investigate how to model these correlations using factor analysis, a statistical method for dimensionality reduction. Factor analysis uses a small number of parameters to model the covariance structure of high dimensional data. These parameters are estimated by an Expectation-Maximization (EM) algorithm that can be embedded in the training procedures for HMMs. We evaluate the combined use of mixture densities and factor analysis in HMMs that recognize alphanumeric strings. Holding the total number of parameters fixed, we find that these methods, properly combined, yield better models than either method on its own.
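As a rough illustration of the idea in the abstract (not the authors' implementation), factor analysis models the covariance of a d-dimensional feature vector as Sigma = Lambda Lambda^T + Psi, where Lambda is a d x k loading matrix (k << d) and Psi is diagonal, so only d*k + d parameters are needed instead of d*(d+1)/2. The sketch below runs a standard EM loop for factor analysis on synthetic zero-mean data; all dimensions and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with low-rank-plus-diagonal covariance, an illustrative
# stand-in for correlated speech feature vectors (dimensions are made up).
d, k, n = 10, 2, 2000                     # feature dim, factors, samples
true_load = rng.normal(size=(d, k))
true_psi = 0.1 + 0.4 * rng.random(d)
z = rng.normal(size=(n, k))
x = z @ true_load.T + rng.normal(size=(n, d)) * np.sqrt(true_psi)
x -= x.mean(axis=0)                       # factor analysis assumes zero mean

# EM for factor analysis: fit Sigma = Lambda Lambda^T + Psi using
# d*k + d parameters (30 here) instead of d*(d+1)/2 (55 here).
S = np.cov(x, rowvar=False)               # sample covariance
Lam = rng.normal(size=(d, k))
psi = np.ones(d)
for _ in range(200):
    # E-step: posterior of latent factors, E[z|x] = beta @ x
    G = np.linalg.inv(np.eye(k) + Lam.T @ (Lam / psi[:, None]))
    beta = G @ Lam.T / psi                # k x d
    Ezz = n * G + beta @ (n * S) @ beta.T # summed second moments of z
    # M-step: update loadings and diagonal noise variances
    Lam = (n * S) @ beta.T @ np.linalg.inv(Ezz)
    psi = np.maximum(np.diag(S - Lam @ beta @ S), 1e-6)

model_cov = Lam @ Lam.T + np.diag(psi)    # fitted covariance model
```

In an HMM, each Gaussian mixture component would carry its own loadings and diagonal noise, and these EM updates would be interleaved with the usual Baum-Welch statistics; the paper's contribution is precisely that combination.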