ISCA Archive Interspeech 2023

ProsAudit, a prosodic benchmark for self-supervised speech models

Maureen de Seyssel, Marvin Lavechin, Hadrien Titeux, Arthur Thomas, Gwendal Virlet, Andrea Santos Revilla, Guillaume Wisniewski, Bogdan Ludusan, Emmanuel Dupoux

We present ProsAudit, an English-language benchmark for assessing structural prosodic knowledge in self-supervised learning (SSL) speech models. It consists of two subtasks, their corresponding metrics, and an evaluation dataset. In the protosyntax task, the model must correctly identify strong versus weak prosodic boundaries. In the lexical task, the model must distinguish pauses inserted between words from pauses inserted within words. We also provide human evaluation scores on this benchmark. We evaluated a series of SSL models and found that all of them performed above chance on both tasks, even when evaluated on an unseen language. However, non-native models performed significantly worse than native ones on the lexical task, highlighting the importance of lexical knowledge for this task. We also found a clear effect of training-set size, with models trained on more data performing better on both subtasks.
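Both subtasks can be framed as pairwise discrimination: the model scores two variants of a stimulus (e.g. a pause at a plausible versus an implausible position) and is counted correct when it prefers the natural one, so chance is 50%. The sketch below is an illustrative implementation of this generic pairwise-accuracy metric, not the authors' released evaluation code; the scores are assumed to be model-derived values such as log-likelihoods.

```python
# Illustrative sketch (not the ProsAudit reference implementation).
# Each pair holds (score_natural, score_unnatural), e.g. log-likelihoods
# assigned by an SSL model to the two variants of a stimulus.
# A pair counts as correct when the natural variant scores higher;
# chance level for this metric is 0.5.

def pairwise_accuracy(pairs):
    """Return the fraction of pairs where the natural variant wins."""
    pairs = list(pairs)
    correct = sum(nat > unnat for nat, unnat in pairs)
    return correct / len(pairs)

# Toy example with made-up scores:
scores = [(-12.3, -15.1), (-8.0, -7.5), (-20.4, -22.0)]
print(pairwise_accuracy(scores))
```

Here the model prefers the natural variant in two of three toy pairs, giving an accuracy of about 0.67; a real evaluation would aggregate over the full set of minimal pairs in each subtask.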


doi: 10.21437/Interspeech.2023-438

Cite as: de Seyssel, M., Lavechin, M., Titeux, H., Thomas, A., Virlet, G., Revilla, A.S., Wisniewski, G., Ludusan, B., Dupoux, E. (2023) ProsAudit, a prosodic benchmark for self-supervised speech models. Proc. INTERSPEECH 2023, 2963-2967, doi: 10.21437/Interspeech.2023-438

@inproceedings{deseyssel23_interspeech,
  author={Maureen {de Seyssel} and Marvin Lavechin and Hadrien Titeux and Arthur Thomas and Gwendal Virlet and Andrea Santos Revilla and Guillaume Wisniewski and Bogdan Ludusan and Emmanuel Dupoux},
  title={{ProsAudit, a prosodic benchmark for self-supervised speech models}},
  year=2023,
  booktitle={Proc. INTERSPEECH 2023},
  pages={2963--2967},
  doi={10.21437/Interspeech.2023-438},
  issn={2308-457X}
}