ISCA Archive Interspeech 2012

Recurrent neural networks for noise reduction in robust ASR

Andrew L. Maas, Quoc V. Le, Tyler M. O'Neil, Oriol Vinyals, Patrick Nguyen, Andrew Y. Ng

Recent work on deep neural networks as acoustic models for automatic speech recognition (ASR) has demonstrated substantial performance improvements. We introduce a model that uses a deep recurrent autoencoder neural network to denoise input features for robust ASR. The model is trained on stereo (noisy and clean) audio features to predict clean features given noisy input. It makes no assumptions about how noise affects the signal, nor about the existence of distinct noise environments; instead, it can learn to model any type of distortion or additive noise given sufficient training data. We demonstrate that the model is competitive with existing feature-denoising approaches on the Aurora2 task, and that it outperforms a tandem approach in which deep networks are used to predict phoneme posteriors directly.
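The core training setup described above — learning a mapping from noisy features to their clean counterparts from paired ("stereo") data — can be sketched in a few lines. The toy below is a heavily simplified, hypothetical stand-in for the paper's deep recurrent autoencoder: it uses fixed random recurrent weights (echo-state style) and fits only a linear output layer by ridge regression, rather than training the full network; all dimensions and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper's features and network sizes differ).
T, D, H = 200, 8, 64  # frames, feature dimension, hidden units

# Synthetic "stereo" training pair: a smooth clean feature sequence
# and a copy corrupted by additive noise.
clean = np.sin(np.linspace(0, 8 * np.pi, T))[:, None] * rng.standard_normal((1, D))
noisy = clean + 0.3 * rng.standard_normal((T, D))

# Fixed random input/recurrent weights (a simplification: the paper
# trains all recurrent-network weights, not just the output layer).
W_in = 0.5 * rng.standard_normal((H, D))
W_rec = 0.5 / np.sqrt(H) * rng.standard_normal((H, H))

def run_rnn(x):
    """Run a tanh recurrence over the noisy frames, collecting hidden states."""
    h = np.zeros(H)
    states = []
    for frame in x:
        h = np.tanh(W_in @ frame + W_rec @ h)
        states.append(h)
    return np.array(states)  # shape (T, H)

H_states = run_rnn(noisy)

# Fit the output layer by ridge regression: hidden states -> clean frames.
lam = 1e-3
W_out = np.linalg.solve(H_states.T @ H_states + lam * np.eye(H),
                        H_states.T @ clean).T  # shape (D, H)

denoised = H_states @ W_out.T

def mse(a, b):
    """Mean squared error between two (T, D) feature sequences."""
    return float(np.mean((a - b) ** 2))

print("noisy MSE:   ", mse(noisy, clean))
print("denoised MSE:", mse(denoised, clean))
```

Because the model sees only (noisy, clean) pairs, nothing in this setup encodes a particular noise type or environment — the same training loop applies to any distortion represented in the data, which is the property the abstract emphasizes.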

Index Terms: neural networks, robust ASR, deep learning

doi: 10.21437/Interspeech.2012-6

Cite as: Maas, A.L., Le, Q.V., O'Neil, T.M., Vinyals, O., Nguyen, P., Ng, A.Y. (2012) Recurrent neural networks for noise reduction in robust ASR. Proc. Interspeech 2012, 22-25, doi: 10.21437/Interspeech.2012-6

@inproceedings{maas12_interspeech,
  author={Andrew L. Maas and Quoc V. Le and Tyler M. O'Neil and Oriol Vinyals and Patrick Nguyen and Andrew Y. Ng},
  title={{Recurrent neural networks for noise reduction in robust ASR}},
  booktitle={Proc. Interspeech 2012},
  year={2012},
  pages={22--25},
  doi={10.21437/Interspeech.2012-6}
}