ISCA Archive Interspeech 2014

Data augmentation for low resource languages

Anton Ragni, Kate M. Knill, Shakti P. Rath, Mark J. F. Gales

Recently there has been interest in approaches to training speech recognition systems for languages with limited resources. Under the IARPA Babel program such resources have been provided for a range of languages to support this research area. This paper examines one family of approaches, data augmentation, that can be applied in these situations. Data augmentation schemes aim to increase the quantity of data available to train the system, for example via semi-supervised training, multilingual processing, acoustic data perturbation and speech synthesis. To date the majority of work has considered individual data augmentation schemes, with few consistent performance contrasts or examinations of whether the schemes are complementary. In this work two data augmentation schemes, semi-supervised training and vocal tract length perturbation, are examined and combined on the Babel limited language pack configuration, where only about 10 hours of transcribed acoustic data are available. Two languages are examined, Assamese and Zulu, which were found to be the most challenging of the Babel languages released for the 2014 Evaluation. For both languages consistent speech recognition performance gains can be obtained using these augmentation schemes. Furthermore, the impact of these performance gains on a down-stream keyword spotting task is also described.
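One of the two schemes studied, vocal tract length perturbation (VTLP), creates additional training data by warping the frequency axis of each utterance with a randomly sampled factor. The paper does not include an implementation; below is a minimal sketch of the commonly used piecewise-linear warping function (in the style of Jaitly and Hinton, 2013), where the function name, the boundary frequency `f_hi`, and the warp-factor range are illustrative assumptions, not details taken from this paper.

```python
import numpy as np

def vtlp_warp(freqs, alpha, f_hi=4800.0, f_max=8000.0):
    """Piecewise-linear VTLP frequency warping (illustrative sketch).

    freqs: array of frequencies in Hz (e.g. mel filterbank centres).
    alpha: warp factor, typically drawn uniformly from ~[0.9, 1.1].
    f_hi:  boundary above which the mapping is compressed so that
           f_max is mapped to itself (assumed value, not from the paper).
    """
    freqs = np.asarray(freqs, dtype=float)
    # Below the boundary, frequencies are scaled linearly by alpha;
    # above it, a second linear segment pins f_max to f_max.
    boundary = f_hi * min(alpha, 1.0) / alpha
    return np.where(
        freqs <= boundary,
        freqs * alpha,
        f_max - (f_max - f_hi * min(alpha, 1.0))
        * (f_max - freqs) / (f_max - boundary),
    )

# Per-utterance augmentation: sample one warp factor and apply it to the
# filterbank centre frequencies before feature extraction.
rng = np.random.default_rng(0)
alpha = rng.uniform(0.9, 1.1)
centres = np.linspace(0.0, 8000.0, 24)
warped_centres = vtlp_warp(centres, alpha)
```

With `alpha = 1.0` the mapping reduces to the identity, and for any `alpha` the endpoints 0 Hz and `f_max` are fixed, so the warped filterbank still spans the full analysis band.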

doi: 10.21437/Interspeech.2014-207

Cite as: Ragni, A., Knill, K.M., Rath, S.P., Gales, M.J.F. (2014) Data augmentation for low resource languages. Proc. Interspeech 2014, 810-814, doi: 10.21437/Interspeech.2014-207

@inproceedings{ragni14_interspeech,
  author={Anton Ragni and Kate M. Knill and Shakti P. Rath and Mark J. F. Gales},
  title={{Data augmentation for low resource languages}},
  booktitle={Proc. Interspeech 2014},
  year={2014},
  pages={810--814},
  doi={10.21437/Interspeech.2014-207}
}