ISCA Archive Interspeech 2021

Phoneme-to-Audio Alignment with Recurrent Neural Networks for Speaking and Singing Voice

Yann Teytaut, Axel Roebel

Phoneme-to-audio alignment is the task of synchronizing voice recordings with their phonetic transcripts. In this work, we introduce a new system for forced phonetic alignment with Recurrent Neural Networks (RNN). With the Connectionist Temporal Classification (CTC) loss as training objective, and an additional reconstruction cost, we learn to infer relevant per-frame phoneme probabilities from which the alignment is derived. The core of the neural architecture is a context-aware attention mechanism between mel-spectrograms and side information. We investigate two contexts given by either phoneme sequences (model PhAtt) or the spectrograms themselves (model SpAtt). Evaluations show that these models produce precise alignments for both speaking and singing voice. Best results are obtained with the model PhAtt, which outperforms the baseline reference with an average imprecision of 16.3ms on speech and 29.8ms on singing. The model SpAtt also appears to be an interesting alternative, capable of aligning longer audio files without requiring phoneme sequences for small audio segments.
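To make the alignment step concrete, here is a minimal illustrative sketch (not the authors' code) of how a frame-level alignment can be derived from per-frame phoneme probabilities, such as those produced by a CTC-trained network: the target phoneme sequence is expanded with interleaved blanks, and a Viterbi search over the frame-wise log-probabilities yields the most likely monotonic path. The function name and the toy probabilities are illustrative assumptions.

```python
import numpy as np

def ctc_forced_align(log_probs, targets, blank=0):
    """Viterbi forced alignment over CTC-style per-frame log-probabilities.

    log_probs: (T, C) array of frame-wise phoneme log-probabilities.
    targets:   list of target phoneme indices (without blanks).
    Returns a length-T list: the phoneme (or blank) assigned to each frame.
    """
    # Extended label sequence: a blank before, between, and after phonemes.
    ext = [blank]
    for p in targets:
        ext += [p, blank]
    T, S = log_probs.shape[0], len(ext)

    dp = np.full((T, S), -np.inf)          # best path log-score ending at (t, s)
    bp = np.zeros((T, S), dtype=int)       # backpointer: how far s advanced (0, 1, or 2)
    dp[0, 0] = log_probs[0, ext[0]]
    if S > 1:
        dp[0, 1] = log_probs[0, ext[1]]

    for t in range(1, T):
        for s in range(S):
            cands = [dp[t - 1, s]]                         # stay on the same label
            if s >= 1:
                cands.append(dp[t - 1, s - 1])             # advance by one label
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                cands.append(dp[t - 1, s - 2])             # skip a blank between distinct phonemes
            best = int(np.argmax(cands))
            dp[t, s] = cands[best] + log_probs[t, ext[s]]
            bp[t, s] = best

    # The path must end on the final blank or the final phoneme.
    s = S - 1 if S == 1 or dp[T - 1, S - 1] >= dp[T - 1, S - 2] else S - 2
    path = [s]
    for t in range(T - 1, 0, -1):
        s -= bp[t, s]
        path.append(s)
    path.reverse()
    return [ext[s] for s in path]
```

With hypothetical posteriors that clearly favor phoneme 1 on the first two frames and phoneme 2 on the last two, the decoded frame labels follow that structure, and phoneme boundaries (and hence time stamps) can be read off directly from the label changes.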


doi: 10.21437/Interspeech.2021-1676

Cite as: Teytaut, Y., Roebel, A. (2021) Phoneme-to-Audio Alignment with Recurrent Neural Networks for Speaking and Singing Voice. Proc. Interspeech 2021, 61-65, doi: 10.21437/Interspeech.2021-1676

@inproceedings{teytaut21_interspeech,
  author={Yann Teytaut and Axel Roebel},
  title={{Phoneme-to-Audio Alignment with Recurrent Neural Networks for Speaking and Singing Voice}},
  year={2021},
  booktitle={Proc. Interspeech 2021},
  pages={61--65},
  doi={10.21437/Interspeech.2021-1676},
  issn={2958-1796}
}