Tomohiro Nakatani, Masato Miyoshi, Keisuke Kinoshita
Speech dereverberation is desirable for achieving, for example, robust speech recognition in the real world. However, it remains a challenging problem, especially when only a single microphone is available. Although blind equalization techniques have been exploited, they cannot deal with speech signals appropriately because speech signals do not satisfy their assumptions. We propose a new dereverberation principle based on an inherent property of speech signals, namely quasi-periodicity. The proposed methods learn the dereverberation filter from a large amount of speech data with no prior knowledge of the data, and can achieve high-quality speech dereverberation, especially when the reverberation time is long.