ISCA Archive Interspeech 2014

Opti-speech: a real-time, 3d visual feedback system for speech training

William Katz, Thomas F. Campbell, Jun Wang, Eric Farrar, J. Coleman Eubanks, Arvind Balasubramanian, Balakrishnan Prabhakaran, Rob Rennaker

We describe an interactive 3D system that provides talkers with real-time information about their tongue and jaw movements during speech. Speech movement is tracked by a magnetometer system (Wave; NDI, Waterloo, Ontario, Canada). A customized interface lets users view their current tongue position (represented as an avatar consisting of flesh-point markers and a modeled surface) within a synchronously moving, transparent head. Subjects receive augmented visual feedback when the tongue sensors achieve the correct place of articulation. Preliminary data from a group of adult talkers suggest that the system can reliably provide real-time feedback for American English consonant place-of-articulation targets. Future studies, including tests with communication-disordered subjects, are described.
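The feedback trigger described in the abstract can be sketched as a simple geometric test: model a place-of-articulation target as a region in head-corrected coordinates and fire feedback when a tongue sensor enters it. This is a minimal illustrative sketch, not the authors' implementation; the function names, the spherical target shape, and the 5 mm radius are all assumptions for illustration.

```python
import math

# Hypothetical sketch (not from the paper): a target place of
# articulation is modeled as a sphere around a reference point in
# head-corrected coordinates; augmented feedback would fire when a
# flesh-point sensor (e.g., the tongue tip) enters that sphere.

def distance(p, q):
    """Euclidean distance between two 3D points (mm)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def in_target(sensor_pos, target_center, radius_mm=5.0):
    """Return True when the sensor lies within the target sphere."""
    return distance(sensor_pos, target_center) <= radius_mm

# Example: one tracked sample near an assumed alveolar target.
target = (0.0, 0.0, 0.0)   # assumed target center (mm)
sample = (1.0, 2.0, 1.5)   # current tongue-tip sensor position (mm)
print(in_target(sample, target))  # distance ≈ 2.69 mm -> True
```

In a real-time system such a test would run on every incoming sensor frame (the Wave system streams sampled flesh-point positions), with the result driving the visual highlight on the tongue avatar.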


doi: 10.21437/Interspeech.2014-298

Cite as: Katz, W., Campbell, T.F., Wang, J., Farrar, E., Eubanks, J.C., Balasubramanian, A., Prabhakaran, B., Rennaker, R. (2014) Opti-speech: a real-time, 3d visual feedback system for speech training. Proc. Interspeech 2014, 1174-1178, doi: 10.21437/Interspeech.2014-298

@inproceedings{katz14_interspeech,
  author={William Katz and Thomas F. Campbell and Jun Wang and Eric Farrar and J. Coleman Eubanks and Arvind Balasubramanian and Balakrishnan Prabhakaran and Rob Rennaker},
  title={{Opti-speech: a real-time, 3d visual feedback system for speech training}},
  year=2014,
  booktitle={Proc. Interspeech 2014},
  pages={1174--1178},
  doi={10.21437/Interspeech.2014-298},
  issn={2308-457X}
}