ISCA Archive Interspeech 2022

Extending RNN-T-based speech recognition systems with emotion and language classification

Zvi Kons, Hagai Aronowitz, Edmilson Morais, Matheus Damasceno, Hong-Kwang Kuo, Samuel Thomas, George Saon

Speech transcription, emotion recognition, and language identification are usually considered to be three different tasks. Each one requires a different model with a different architecture and training process. We propose using a recurrent neural network transducer (RNN-T)-based speech-to-text (STT) system as a common component that can be used for emotion recognition and language identification as well as for speech recognition. Our work extends the STT system for emotion classification through minimal changes and shows successful results on the IEMOCAP and MELD datasets. In addition, we demonstrate that by adding a lightweight component to the RNN-T module, it can also be used for language identification. In our evaluations, this new classifier demonstrates state-of-the-art accuracy on the NIST-LRE-07 dataset.
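The abstract does not specify the architecture of the "lightweight component"; one common design for such a classifier head is to pool the frame-level encoder outputs over time and apply a linear softmax layer. The following NumPy sketch illustrates that idea under purely illustrative assumptions (random weights, a 640-dim encoder, and 14 target languages, matching the NIST-LRE-07 closed-set condition); it is not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def lid_head(encoder_frames, W, b):
    """Mean-pool frame-level encoder outputs over time, then apply a
    linear softmax classifier to produce per-language posteriors."""
    pooled = encoder_frames.mean(axis=0)   # (D,) utterance embedding
    logits = pooled @ W + b                # (L,) one logit per language
    exp = np.exp(logits - logits.max())    # numerically stable softmax
    return exp / exp.sum()

# Illustrative dimensions: T frames, D-dim encoder, L languages.
T, D, L = 50, 640, 14
frames = rng.standard_normal((T, D))       # stand-in for encoder output
W = rng.standard_normal((D, L)) * 0.01     # hypothetical trained weights
b = np.zeros(L)

probs = lid_head(frames, W, b)
print(probs.shape, float(probs.sum()))
```

Because the head operates only on pooled encoder activations, the RNN-T encoder itself stays frozen and shared with transcription, which is what keeps the added component lightweight.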


doi: 10.21437/Interspeech.2022-10480

Cite as: Kons, Z., Aronowitz, H., Morais, E., Damasceno, M., Kuo, H.-K., Thomas, S., Saon, G. (2022) Extending RNN-T-based speech recognition systems with emotion and language classification. Proc. Interspeech 2022, 546-549, doi: 10.21437/Interspeech.2022-10480

@inproceedings{kons22_interspeech,
  author={Zvi Kons and Hagai Aronowitz and Edmilson Morais and Matheus Damasceno and Hong-Kwang Kuo and Samuel Thomas and George Saon},
  title={{Extending RNN-T-based speech recognition systems with emotion and language classification}},
  year=2022,
  booktitle={Proc. Interspeech 2022},
  pages={546--549},
  doi={10.21437/Interspeech.2022-10480},
  issn={2958-1796}
}