ISCA Archive SIGUL 2023

Multilingual Models with Language Embeddings for Low-resource Speech Recognition

Léa-Marie Lam-Yee-Mui, Waad Ben Kheder, Viet-Bac Le, Claude Barras, Jean-Luc Gauvain

Speech recognition for low-resource languages remains challenging and can be addressed with techniques such as multilingual modeling and transfer learning. In this work, we explore several solutions to the multilingual training problem: training monolingual models with multilingual features, adapting a multilingual model with transfer learning, and using language embeddings as additional features. To develop practical solutions, we focus our work on medium-sized hybrid ASR models. The multilingual models are trained on 270 hours of IARPA Babel data from 25 languages, and results are reported on 4 Babel languages for the Limited Language Pack (LLP) condition. The results show that adapting a multilingual acoustic model with language embeddings is an effective solution: it outperforms the baseline monolingual models and provides results comparable to models based on state-of-the-art XLSR-53 features, while needing 15 times fewer parameters.


doi: 10.21437/SIGUL.2023-18

Cite as: Lam-Yee-Mui, L.-M., Ben Kheder, W., Le, V.-B., Barras, C., Gauvain, J.-L. (2023) Multilingual Models with Language Embeddings for Low-resource Speech Recognition. Proc. 2nd Annual Meeting of the ELRA/ISCA SIG on Under-resourced Languages (SIGUL 2023), 83-87, doi: 10.21437/SIGUL.2023-18

@inproceedings{lamyeemui23_sigul,
  author={Léa-Marie Lam-Yee-Mui and Waad {Ben Kheder} and Viet-Bac Le and Claude Barras and Jean-Luc Gauvain},
  title={{Multilingual Models with Language Embeddings for Low-resource Speech Recognition}},
  year=2023,
  booktitle={Proc. 2nd Annual Meeting of the ELRA/ISCA SIG on Under-resourced Languages (SIGUL 2023)},
  pages={83--87},
  doi={10.21437/SIGUL.2023-18}
}