ISCA Archive Interspeech 2024

PARAN: Variational Autoencoder-based End-to-End Articulation-to-Speech System for Speech Intelligibility

Seyun Um, Doyeon Kim, Hong-Goo Kang

Deep learning-based articulation-to-speech (ATS) systems for individuals with speech disorders have been extensively researched in recent years. However, conventional methods struggle to model the latent-space transformation between the speech and electromagnetic articulography (EMA) domains, resulting in low speech quality. In this paper, we propose PARAN, a variational autoencoder (VAE)-based end-to-end ATS model that efficiently produces high-fidelity speech from EMA signals. Using a normalizing flow, our model adjusts the prior distribution of latent representations derived from EMA signals to match the posterior distribution derived from speech. To further enhance the clarity and intelligibility of the synthesized speech, we incorporate an additional loss that predicts phonetic information from the EMA signals. Experimental results demonstrate that our model outperforms previous methods in both speech quality and intelligibility.
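
To make the described objective concrete, the sketch below illustrates, under our own assumptions, the two loss terms highlighted in the abstract: a KL-style term that matches the speech posterior to a flow-adjusted prior computed from EMA signals, and an auxiliary phonetic prediction loss on the EMA encoder. This is a minimal, hypothetical PyTorch illustration, not the authors' implementation; all module names, feature dimensions, the single affine flow, and the frame-level cross-entropy standing in for the paper's phonetic loss are assumptions made for exposition.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PosteriorEncoder(nn.Module):
    """q(z | speech): maps mel-spectrogram frames to Gaussian posterior parameters."""
    def __init__(self, mel_dim=80, z_dim=64):
        super().__init__()
        self.net = nn.Conv1d(mel_dim, 2 * z_dim, kernel_size=5, padding=2)

    def forward(self, mel):                          # mel: (B, mel_dim, T)
        mu, logvar = self.net(mel).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        return z, mu, logvar

class PriorEncoder(nn.Module):
    """p(z | EMA): maps articulatory trajectories to Gaussian prior parameters
    and to frame-level phone logits for the auxiliary phonetic loss."""
    def __init__(self, ema_dim=12, z_dim=64, n_phones=40):
        super().__init__()
        self.net = nn.Conv1d(ema_dim, 2 * z_dim, kernel_size=5, padding=2)
        self.phone_head = nn.Conv1d(2 * z_dim, n_phones, kernel_size=1)

    def forward(self, ema):                          # ema: (B, ema_dim, T)
        h = self.net(ema)
        mu_p, logvar_p = h.chunk(2, dim=1)
        return mu_p, logvar_p, self.phone_head(h)

class AffineFlow(nn.Module):
    """A single element-wise affine transform standing in for the normalizing flow."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(1, z_dim, 1))
        self.shift = nn.Parameter(torch.zeros(1, z_dim, 1))

    def forward(self, z):
        z_p = z * torch.exp(self.log_scale) + self.shift
        return z_p, self.log_scale                   # per-element log|det df/dz|

def kl_flow_loss(z_q, mu_q, logvar_q, flow, mu_p, logvar_p):
    """Monte-Carlo KL between the speech posterior and the flow-adjusted EMA prior.
    Constant log(2*pi) terms cancel between log_q and log_p and are omitted."""
    z_p, logdet = flow(z_q)
    log_q = -0.5 * (logvar_q + (z_q - mu_q) ** 2 / torch.exp(logvar_q))
    log_p = -0.5 * (logvar_p + (z_p - mu_p) ** 2 / torch.exp(logvar_p))
    return (log_q - log_p - logdet).mean()

# Illustrative forward pass with random tensors; all shapes are assumptions.
B, T = 2, 100
mel = torch.randn(B, 80, T)                          # speech features
ema = torch.randn(B, 12, T)                          # articulatory features
phones = torch.randint(0, 40, (B, T))                # frame-level phone labels

posterior, prior, flow = PosteriorEncoder(), PriorEncoder(), AffineFlow()
z_q, mu_q, logvar_q = posterior(mel)
mu_p, logvar_p, phone_logits = prior(ema)

loss_kl = kl_flow_loss(z_q, mu_q, logvar_q, flow, mu_p, logvar_p)
loss_phone = F.cross_entropy(phone_logits, phones)   # stand-in for the phonetic loss
loss = loss_kl + loss_phone

A complete end-to-end system would additionally include a waveform decoder with reconstruction (and possibly adversarial) losses; the fragment above only isolates the prior-matching and phonetic terms described in the abstract.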