ISCA Archive Interspeech 2014

An evaluation of unsupervised acoustic model training for a dysarthric speech interface

Oliver Walter, Vladimir Despotovic, Reinhold Haeb-Umbach, Jort F. Gemmeke, Bart Ons, Hugo Van hamme

In this paper, we investigate unsupervised acoustic model training approaches for dysarthric-speech recognition. The models are, first, frame-based Gaussian posteriorgrams obtained from Vector Quantization (VQ); second, so-called Acoustic Unit Descriptors (AUDs), i.e., hidden Markov models of phone-like units trained in an unsupervised fashion; and, third, posteriorgrams computed on the AUDs. Experiments were carried out on a database collected from a home automation task and containing nine speakers, of whom seven are considered to utter dysarthric speech. All unsupervised modeling approaches delivered significantly better recognition rates than a speaker-independent phoneme recognition baseline, showing the suitability of unsupervised acoustic model training for dysarthric speech. While the AUD models led to the most compact representation of an utterance for the subsequent semantic inference stage, posteriorgram-based representations resulted in higher recognition rates, with the Gaussian posteriorgram achieving the highest slot filling F-score of 97.02%.
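To make the first representation concrete, the following is a minimal sketch of computing a frame-based Gaussian posteriorgram from a VQ codebook: each codeword is treated as a diagonal Gaussian, and each feature frame is mapped to its posterior distribution over codewords. The function name, shared unit variance, and random inputs are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def gaussian_posteriorgram(frames, means, var=1.0):
    """Posterior over K Gaussian codewords for each of T frames.

    frames: (T, D) array of feature frames (e.g. MFCCs)
    means:  (K, D) VQ codebook centroids, each modeled as a diagonal
            Gaussian with shared variance `var` and uniform prior
            (a simplifying assumption for illustration).
    Returns a (T, K) posteriorgram whose rows sum to 1.
    """
    diff = frames[:, None, :] - means[None, :, :]       # (T, K, D)
    log_lik = -0.5 * np.sum(diff ** 2 / var, axis=-1)   # log-likelihood up to a constant
    log_lik -= log_lik.max(axis=1, keepdims=True)       # subtract max for numerical stability
    post = np.exp(log_lik)
    return post / post.sum(axis=1, keepdims=True)

# Toy usage with random data standing in for real speech features.
rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 13))   # 100 frames of 13-dim features
means = rng.normal(size=(8, 13))      # 8-entry VQ codebook
pg = gaussian_posteriorgram(frames, means)
print(pg.shape)                       # (100, 8)
```

The resulting sequence of posterior vectors, rather than the hard VQ labels, is what serves as the utterance representation passed to the semantic inference stage.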


doi: 10.21437/Interspeech.2014-265

Cite as: Walter, O., Despotovic, V., Haeb-Umbach, R., Gemmeke, J.F., Ons, B., Van hamme, H. (2014) An evaluation of unsupervised acoustic model training for a dysarthric speech interface. Proc. Interspeech 2014, 1013-1017, doi: 10.21437/Interspeech.2014-265

@inproceedings{walter14_interspeech,
  author={Oliver Walter and Vladimir Despotovic and Reinhold Haeb-Umbach and Jort F. Gemmeke and Bart Ons and Hugo {Van hamme}},
  title={{An evaluation of unsupervised acoustic model training for a dysarthric speech interface}},
  year=2014,
  booktitle={Proc. Interspeech 2014},
  pages={1013--1017},
  doi={10.21437/Interspeech.2014-265},
  issn={2308-457X}
}