ISCA Archive Interspeech 2012

Articulatory feature based multilingual MLPs for low-resource speech recognition

Yanmin Qian, Jia Liu

Large vocabulary continuous speech recognition is particularly difficult for low-resource languages. The scenario we focus on here is one in which there is very limited acoustic training data in the target language but more plentiful data in other languages. We investigate approaches based on the Automatic Speech Attribute Transcription (ASAT) framework and train universal classifiers on multiple languages to learn articulatory features. A hierarchical architecture is applied at both the articulatory-feature and phone levels to make the neural network more discriminative. Finally, we train multilayer perceptrons (MLPs) on multiple streams from different languages, obtaining MLPs suited to this low-resource application. In our experiments, we obtain significant improvements of about 12% relative over a conventional baseline in this low-resource scenario.
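The hierarchical architecture described above can be illustrated with a minimal NumPy sketch: a first MLP maps acoustic features to articulatory-feature posteriors, and a second MLP consumes the acoustic features concatenated with those posteriors to produce phone posteriors. All dimensions, the random untrained weights, and the concatenation scheme here are illustrative assumptions, not the configuration reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mlp_forward(x, w1, b1, w2, b2):
    # One hidden layer with tanh, softmax output posteriors.
    h = np.tanh(x @ w1 + b1)
    return softmax(h @ w2 + b2)

# Illustrative sizes: 39-dim acoustic frames, 8 articulatory
# feature classes, 40 phones (all assumed, not from the paper).
n_in, n_hid, n_af, n_ph = 39, 64, 8, 40

# Stage 1: acoustic features -> articulatory-feature posteriors.
w1a = rng.normal(scale=0.1, size=(n_in, n_hid)); b1a = np.zeros(n_hid)
w2a = rng.normal(scale=0.1, size=(n_hid, n_af)); b2a = np.zeros(n_af)

# Stage 2: acoustic features + AF posteriors -> phone posteriors.
w1p = rng.normal(scale=0.1, size=(n_in + n_af, n_hid)); b1p = np.zeros(n_hid)
w2p = rng.normal(scale=0.1, size=(n_hid, n_ph)); b2p = np.zeros(n_ph)

x = rng.normal(size=(5, n_in))                # 5 acoustic frames
af_post = mlp_forward(x, w1a, b1a, w2a, b2a)  # (5, 8) AF posteriors
phone_post = mlp_forward(np.hstack([x, af_post]), w1p, b1p, w2p, b2p)  # (5, 40)
```

In the multilingual setting the paper targets, the first-stage classifiers can be trained on pooled data from several languages, since articulatory attributes are largely language-universal, while only the phone-level stage needs target-language data.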

Index Terms: low-resource language; multilayer perceptrons; articulatory features; hierarchical architectures


doi: 10.21437/Interspeech.2012-16

Cite as: Qian, Y., Liu, J. (2012) Articulatory feature based multilingual MLPs for low-resource speech recognition. Proc. Interspeech 2012, 2602-2605, doi: 10.21437/Interspeech.2012-16

@inproceedings{qian12b_interspeech,
  author={Yanmin Qian and Jia Liu},
  title={{Articulatory feature based multilingual MLPs for low-resource speech recognition}},
  year=2012,
  booktitle={Proc. Interspeech 2012},
  pages={2602--2605},
  doi={10.21437/Interspeech.2012-16},
  issn={2958-1796}
}