ISCA Archive Interspeech 2023

Silent Speech Recognition with Articulator Positions Estimated from Tongue Ultrasound and Lip Video

Rachel Beeson, Korin Richmond

We present a multi-speaker silent speech recognition (SSR) system trained on articulator features derived from the Tongue and Lips corpus, a multi-speaker corpus of ultrasound tongue imaging and lip video data. We extracted articulator features using the pose estimation software DeepLabCut, then trained recognition models on these point-tracking features using Kaldi. We trained on voiced utterances and tested performance on both voiced and silent utterances. Our multi-speaker SSR reduced WER by 23.06% compared to a previous, similar multi-speaker SSR system that used image-based rather than point-tracking features. We also found substantial improvements (up to a 15.45% reduction in WER) in recognition of silent speech when using fMLLR adaptation compared to raw features. Finally, we investigated differences in articulator trajectories between voiced and silent speech and found that, when speaking silently, speakers tend to miss articulatory targets that are present in voiced speech.
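As a rough illustration of the feature-extraction step described in the abstract, the sketch below shows how point-tracking articulator features might be obtained from lip video using DeepLabCut's Python API. The project config path, video filename, and output filename are hypothetical placeholders, and the subsequent conversion to Kaldi feature archives is only gestured at; the authors' actual pipeline is not detailed in this abstract.

```python
# Hypothetical sketch: extracting point-tracking features with DeepLabCut
# and flattening them into per-frame feature vectors for a downstream ASR
# toolkit such as Kaldi. All paths and filenames are assumptions.
import deeplabcut
import pandas as pd

config_path = "tal-lips/config.yaml"    # assumed DeepLabCut project config
videos = ["speaker01_utt001_lips.mp4"]  # assumed lip-video file

# Run a trained pose-estimation model over the video; DeepLabCut writes
# per-frame (x, y, likelihood) values for each tracked keypoint.
deeplabcut.analyze_videos(config_path, videos, save_as_csv=True)

# Load the resulting tracks. DeepLabCut CSVs use a three-level column
# header: (scorer, bodypart, coordinate).
tracks = pd.read_csv(
    "speaker01_utt001_lipsDLC_tracks.csv",  # assumed output filename
    header=[0, 1, 2],
    index_col=0,
)

# Keep only the x/y coordinates (dropping likelihoods) to form a
# per-frame feature vector, e.g. for conversion to Kaldi archives.
coords = tracks.loc[:, tracks.columns.get_level_values(2).isin(["x", "y"])]
features = coords.to_numpy()  # shape: (num_frames, 2 * num_keypoints)
print(features.shape)
```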