Part of Advances in Neural Information Processing Systems 17 (NIPS 2004)
Francis Bach, Michael Jordan
We present an algorithm to perform blind, one-microphone speech separation. Our algorithm separates mixtures of speech without modeling individual speakers. Instead, we formulate the problem of speech separation as a problem in segmenting the spectrogram of the signal into two or more disjoint sets. We build feature sets for our segmenter using classical cues from speech psychophysics. We then combine these features into parameterized affinity matrices. We also take advantage of the fact that we can generate training examples for segmentation by artificially superposing separately-recorded signals. Thus the parameters of the affinity matrices can be tuned using recent work on learning spectral clustering. This yields an adaptive, speech-specific segmentation algorithm that can successfully separate one-microphone speech mixtures.
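The core operation the abstract describes — partitioning time-frequency bins of a spectrogram via an affinity matrix — can be sketched with a standard 2-way spectral cut. This is a minimal illustration, not the paper's method: the toy 2-D features and the fixed Gaussian affinity stand in for the paper's psychophysically motivated cues and learned, parameterized affinity matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "features" for 40 time-frequency bins forming two groups
# (stand-ins for cues such as pitch or common onset).
feats = np.concatenate([rng.normal(0.0, 0.3, (20, 2)),
                        rng.normal(3.0, 0.3, (20, 2))])

# Gaussian affinity W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)).
# In the paper, such affinities are parameterized and tuned by learning;
# here sigma is simply fixed.
sigma = 1.0
d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2.0 * sigma ** 2))

# Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
d = W.sum(axis=1)
Dinv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(len(W)) - Dinv_sqrt @ W @ Dinv_sqrt

# The eigenvector for the second-smallest eigenvalue gives the bipartition:
# thresholding its sign assigns each bin to one of two segments.
vals, vecs = np.linalg.eigh(L)
labels = (vecs[:, 1] > 0).astype(int)
```

With well-separated feature groups, `labels` assigns one segment to each group; in the speech setting, each segment of spectrogram bins would then be used to reconstruct one source.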