ISCA Archive Interspeech 2014

Efficient GPU-based training of recurrent neural network language models using spliced sentence bunch

X. Chen, Y. Wang, X. Liu, Mark J. F. Gales, Philip C. Woodland

Recurrent neural network language models (RNNLMs) are becoming increasingly popular for a range of applications including speech recognition. However, the computational cost of training limits the quantity of data that can be used, and hence their possible application areas. A standard approach to this problem is to use class-based output layers, allowing systems to be trained on CPUs. This paper describes an alternative approach that allows RNNLMs to be efficiently trained on GPUs. This enables larger quantities of data to be used, and networks with an unclustered, full output layer to be trained. To improve efficiency on GPUs, multiple sentences are “spliced” together for each mini-batch or “bunch” in training. On a large vocabulary conversational telephone speech recognition task, the training time was reduced by a factor of 27 over the standard CPU-based RNNLM toolkit. The use of an unclustered, full output layer also improves perplexity and recognition performance over class-based RNNLMs.
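The “spliced sentence bunch” idea can be illustrated with a small sketch: sentences are concatenated end-to-end into a fixed number of parallel streams (the bunch), so every row of each mini-batch is full of tokens and GPU matrix operations stay well utilized, rather than padding each sentence to the longest in the batch. The function name and its interface below are illustrative assumptions, not the paper's actual toolkit API.

```python
def splice_into_bunches(sentences, bunch_size, steps):
    """Splice sentences (lists of word ids) into `bunch_size` parallel
    streams, then cut the streams into (input, target) mini-batches of
    `steps` time steps each. Targets are inputs shifted by one token,
    as in standard next-word language-model training."""
    # Concatenate all sentences into one long token stream.
    tokens = [w for sent in sentences for w in sent]
    # Split the stream into bunch_size rows of equal length
    # (the remainder that does not fill a full row is dropped).
    per_stream = len(tokens) // bunch_size
    streams = [tokens[i * per_stream:(i + 1) * per_stream]
               for i in range(bunch_size)]
    batches = []
    # Emit only full-length batches; the ragged tail is skipped in this sketch.
    for t in range(0, per_stream - steps, steps):
        inputs = [s[t:t + steps] for s in streams]
        targets = [s[t + 1:t + 1 + steps] for s in streams]
        batches.append((inputs, targets))
    return batches
```

Because sentence boundaries are ignored when filling the streams, no compute is wasted on padding; boundary tokens (e.g. a sentence-end symbol) would simply appear inside the streams.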


doi: 10.21437/Interspeech.2014-163

Cite as: Chen, X., Wang, Y., Liu, X., Gales, M.J.F., Woodland, P.C. (2014) Efficient GPU-based training of recurrent neural network language models using spliced sentence bunch. Proc. Interspeech 2014, 641-645, doi: 10.21437/Interspeech.2014-163

@inproceedings{chen14b_interspeech,
  author={X. Chen and Y. Wang and X. Liu and Mark J. F. Gales and Philip C. Woodland},
  title={{Efficient GPU-based training of recurrent neural network language models using spliced sentence bunch}},
  year=2014,
  booktitle={Proc. Interspeech 2014},
  pages={641--645},
  doi={10.21437/Interspeech.2014-163},
  issn={2308-457X}
}