ISCA Archive Interspeech 2024

Preprocessing for acoustic-to-articulatory inversion using real-time MRI movies of Japanese speech

Anna Oura, Hideaki Kikuchi, Tetsunori Kobayashi

Acoustic-to-articulatory inversion (AAI) estimates articulatory movements from acoustic speech signals. Traditional AAI relies on indirect estimation through articulatory models, but recent work has proposed machine learning models that directly output real-time MRI (rtMRI) movies. This study applies an existing model to rtMRI movies of Japanese speech to test whether the devised preprocessing methods enable highly accurate estimation. The preprocessing consists of normalizing face alignment and filtering out extraneous regions. For objective evaluation, we measured the complex wavelet structural similarity (CW-SSIM). The results indicate that combining the normalization and filtering steps produces smooth rtMRI movies that closely resemble the originals (average CW-SSIM: LSTM, 0.795; BLSTM, 0.793), demonstrating the effectiveness of the preprocessing.
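
The abstract does not specify how the alignment normalization or filtering is implemented. As an illustration only, the sketch below shows one minimal way such preprocessing could look: per-frame translation alignment to a reference frame via phase correlation (a stand-in for the paper's face-alignment normalization) and a binary region-of-interest mask that zeroes out areas outside the vocal tract (a stand-in for the filtering step). The function names, the translation-only alignment, and the placeholder mask are assumptions, not the authors' implementation.

```python
import numpy as np


def estimate_shift(reference, frame):
    """Estimate the integer (row, col) translation of `frame` relative to
    `reference` via phase correlation (peak of the normalized cross-power
    spectrum). Translation-only alignment is an assumption of this sketch."""
    f_ref = np.fft.fft2(reference)
    f_img = np.fft.fft2(frame)
    cross_power = f_ref * np.conj(f_img)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past half the image size correspond to negative translations.
    return tuple(p - size if p > size // 2 else p
                 for p, size in zip(peak, reference.shape))


def preprocess_frame(frame, reference, mask):
    """Align `frame` to `reference` and zero out regions outside `mask`."""
    dy, dx = estimate_shift(reference, frame)
    aligned = np.roll(frame, shift=(dy, dx), axis=(0, 1))
    return aligned * mask


# Usage: align every frame of an rtMRI movie (T x H x W) to its first frame
# and keep only a hypothetical vocal-tract region of interest.
movie = np.random.rand(10, 68, 68)    # placeholder for rtMRI frames
roi_mask = np.ones((68, 68))          # placeholder region-of-interest mask
processed = np.stack([preprocess_frame(f, movie[0], roi_mask) for f in movie])
```

In practice the reference frame, mask geometry, and alignment model (e.g., affine rather than pure translation) would be chosen to match the rtMRI acquisition; the sketch only conveys the two-stage structure of aligning frames and removing extraneous regions described in the abstract.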