ISCA Archive Interspeech 2022

SpeechEQ: Speech Emotion Recognition based on Multi-scale Unified Datasets and Multitask Learning

Zuheng Kang, Junqing Peng, Jianzong Wang, Jing Xiao

Speech emotion recognition (SER) faces many challenges, one of the main ones being the lack of a unified standard across frameworks. In this paper, we propose SpeechEQ, a framework for unifying SER tasks based on a multi-scale unified metric. This metric can be trained by Multitask Learning (MTL), which comprises two emotion recognition tasks, Emotion States Category (ESC) and Emotion Intensity Scale (EIS), and two auxiliary tasks, phoneme recognition and gender recognition. For this framework, we built a Mandarin SER dataset, the SpeechEQ Dataset (SEQD). Experiments on the public Mandarin CASIA and ESD datasets show that our method outperforms baseline methods by a relatively large margin, yielding 8.0% and 6.5% improvements in accuracy, respectively. Additional experiments on IEMOCAP with four emotion categories (i.e., angry, happy, sad, and neutral) show that the proposed method achieves state-of-the-art results, with a weighted accuracy (WA) of 78.16% and an unweighted accuracy (UA) of 77.47%.
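The abstract describes a shared model trained jointly on two main tasks (ESC, EIS) and two auxiliary tasks (phoneme recognition, gender recognition). The following PyTorch sketch illustrates one way such a multitask setup could be wired: a shared speech encoder feeding four task heads, combined through a weighted sum of per-task losses. The encoder choice, layer sizes, class counts, CTC for the phoneme task, and the loss weights are all illustrative assumptions, not details taken from the paper.

  # Minimal multitask-learning sketch, assuming a shared encoder with four
  # task heads (ESC, EIS, phonemes, gender). All architectural details and
  # hyperparameters below are hypothetical.
  import torch
  import torch.nn as nn

  class SpeechEQSketch(nn.Module):
      def __init__(self, feat_dim=80, hidden=256,
                   n_emotions=4, n_intensity=5, n_phonemes=60):
          super().__init__()
          # Shared acoustic encoder (placeholder; the paper's encoder may differ).
          self.encoder = nn.GRU(feat_dim, hidden, num_layers=2,
                                batch_first=True, bidirectional=True)
          d = hidden * 2
          self.esc_head = nn.Linear(d, n_emotions)        # Emotion States Category
          self.eis_head = nn.Linear(d, n_intensity)       # Emotion Intensity Scale
          self.phone_head = nn.Linear(d, n_phonemes + 1)  # +1 for the CTC blank
          self.gender_head = nn.Linear(d, 2)              # auxiliary gender task

      def forward(self, x):
          h, _ = self.encoder(x)             # (B, T, 2*hidden) frame features
          pooled = h.mean(dim=1)             # utterance-level mean pooling
          return {
              "esc": self.esc_head(pooled),
              "eis": self.eis_head(pooled),
              "phonemes": self.phone_head(h).log_softmax(-1),  # frame-level, for CTC
              "gender": self.gender_head(pooled),
          }

  # Hypothetical multitask loss: a weighted sum over the four tasks, with the
  # auxiliary tasks down-weighted relative to the two emotion tasks.
  def multitask_loss(out, targets, weights=(1.0, 1.0, 0.3, 0.3)):
      ce = nn.CrossEntropyLoss()
      ctc = nn.CTCLoss(blank=out["phonemes"].size(-1) - 1)
      l_esc = ce(out["esc"], targets["esc"])
      l_eis = ce(out["eis"], targets["eis"])
      l_ph = ctc(out["phonemes"].transpose(0, 1),  # CTC expects (T, B, C)
                 targets["phonemes"],
                 targets["input_lengths"], targets["phoneme_lengths"])
      l_gen = ce(out["gender"], targets["gender"])
      w = weights
      return w[0] * l_esc + w[1] * l_eis + w[2] * l_ph + w[3] * l_gen

The fixed loss weights here are the simplest option; in practice MTL systems often tune these weights or learn them, and the paper's actual weighting scheme may differ.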


doi: 10.21437/Interspeech.2022-11456

Cite as: Kang, Z., Peng, J., Wang, J., Xiao, J. (2022) SpeechEQ: Speech Emotion Recognition based on Multi-scale Unified Datasets and Multitask Learning. Proc. Interspeech 2022, 4745-4749, doi: 10.21437/Interspeech.2022-11456

@inproceedings{kang22d_interspeech,
  author={Zuheng Kang and Junqing Peng and Jianzong Wang and Jing Xiao},
  title={{SpeechEQ: Speech Emotion Recognition based on Multi-scale Unified Datasets and Multitask Learning}},
  year=2022,
  booktitle={Proc. Interspeech 2022},
  pages={4745--4749},
  doi={10.21437/Interspeech.2022-11456},
  issn={2958-1796}
}