ISCA Archive Interspeech 2014

Distributed asynchronous optimization of convolutional neural networks

William Chan, Ian Lane

Recently, deep Convolutional Neural Networks have been shown to outperform Deep Neural Networks for acoustic modelling, producing state-of-the-art accuracy in speech recognition tasks. Convolutional models provide increased model robustness through pooling invariance and weight sharing across spectrum and time. However, training convolutional models is a very computationally expensive optimization procedure, especially when combined with large training corpora. In this paper, we present a novel algorithm for scalable training of deep Convolutional Neural Networks across multiple GPUs. Our distributed asynchronous stochastic gradient descent algorithm incorporates sparse gradients, momentum and gradient decay to accelerate the training of these networks. Our approach is stable, requiring neither warm-starting nor excessively large minibatches. Our proposed approach enables convolutional models to be trained efficiently across multiple GPUs, allowing a model to be scaled asynchronously across 5 GPU workers with ~68% efficiency.
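The core idea described in the abstract — asynchronous SGD where workers push sparse, momentum-smoothed gradients to shared parameters without a global synchronization barrier — can be sketched in miniature. The following is an illustrative toy, not the paper's implementation: the `ParameterServer` class, the quadratic loss, and all hyperparameter values are assumptions chosen for demonstration.

```python
import threading

class ParameterServer:
    """Toy shared-parameter store with momentum and a sparsity threshold.

    Illustrative only: the class name, threshold rule, and hyperparameters
    are not taken from the paper.
    """

    def __init__(self, dim, lr=0.05, momentum=0.9, sparsity_threshold=1e-3):
        self.w = [0.0] * dim            # shared model parameters
        self.v = [0.0] * dim            # momentum buffer
        self.lr = lr
        self.momentum = momentum
        self.threshold = sparsity_threshold
        self.lock = threading.Lock()    # updates are atomic per gradient

    def apply_gradient(self, grad):
        # Sparse update: skip near-zero entries, then apply a
        # classical-momentum step to the surviving components.
        with self.lock:
            for i, g in enumerate(grad):
                if abs(g) < self.threshold:
                    continue
                self.v[i] = self.momentum * self.v[i] - self.lr * g
                self.w[i] += self.v[i]

def worker(server, target, steps):
    # Each worker repeatedly reads a (possibly stale) snapshot of the
    # weights -- no barrier between workers -- and pushes the gradient
    # of a simple quadratic loss 0.5 * ||w - target||^2, i.e. (w - target).
    for _ in range(steps):
        snapshot = list(server.w)
        grad = [wi - ti for wi, ti in zip(snapshot, target)]
        server.apply_gradient(grad)

# Five asynchronous workers, mirroring the 5-GPU setup in the abstract.
target = [1.0, -2.0, 3.0]
server = ParameterServer(dim=3)
threads = [threading.Thread(target=worker, args=(server, target, 200))
           for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print([round(x, 2) for x in server.w])
```

Despite stale snapshot reads, the shared weights converge toward the target on this convex toy problem; in the real setting each worker would be a GPU computing CNN minibatch gradients rather than a thread differentiating a quadratic.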


doi: 10.21437/Interspeech.2014-277

Cite as: Chan, W., Lane, I. (2014) Distributed asynchronous optimization of convolutional neural networks. Proc. Interspeech 2014, 1073-1077, doi: 10.21437/Interspeech.2014-277

@inproceedings{chan14_interspeech,
  author={William Chan and Ian Lane},
  title={{Distributed asynchronous optimization of convolutional neural networks}},
  year=2014,
  booktitle={Proc. Interspeech 2014},
  pages={1073--1077},
  doi={10.21437/Interspeech.2014-277},
  issn={2308-457X}
}