ISCA Archive Interspeech 2014

Audio-visual signal processing in a multimodal assisted living environment

Alexey Karpov, Lale Akarun, Hülya Yalçın, Alexander Ronzhin, Barış Evrim Demiröz, Aysun Çoban, Miloš Železný

In this paper, we present novel methods and applications for audio and video signal processing in a multimodal assisted living smart space. This intelligent environment was developed during the 7th Summer Workshop on Multimodal Interfaces eNTERFACE. It integrates automatic systems for audio- and video-based monitoring and user tracking in the smart space. In the assisted living environment, users are tracked by omnidirectional video cameras, while speech and non-speech audio events are recognized by a microphone array. The multiple object tracking precision (MOTP) of the developed video monitoring system was 0.78 and 0.73, and the multiple object tracking accuracy (MOTA) was 62.81% and 72.31%, for the single-person and three-people scenarios, respectively. The recognition accuracy of the proposed multilingual speech and audio event recognition system was 96.5% for the user's speech commands and 93.8% for non-speech acoustic events. The design of the assisted living environment, the test scenarios, and the process of audio-visual database collection are described in the paper.
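The MOTP and MOTA figures quoted above are the standard CLEAR MOT tracking metrics: MOTA penalizes misses, false positives, and identity switches relative to the total number of ground-truth objects, while MOTP averages the match quality (here taken as bounding-box overlap, so higher is better) over all matched object-hypothesis pairs. The following minimal sketch shows how the two scores are computed from per-frame counts; the frame dictionary layout and field names are illustrative assumptions, not the paper's implementation.

```python
def clear_mot(frames):
    """Compute CLEAR MOT metrics from per-frame tracking counts.

    Each frame is a dict with (illustrative field names):
      gt       - number of ground-truth objects in the frame
      misses   - ground-truth objects with no matched hypothesis
      fps      - false-positive hypotheses
      switches - identity mismatches (track-label swaps)
      overlaps - list of overlap scores for matched pairs (higher is better)
    """
    total_gt = sum(f["gt"] for f in frames)
    # MOTA: 1 minus the error rate over all ground-truth objects
    errors = sum(f["misses"] + f["fps"] + f["switches"] for f in frames)
    mota = 1.0 - errors / total_gt
    # MOTP (overlap variant): mean match quality over all matched pairs
    matches = sum(len(f["overlaps"]) for f in frames)
    total_overlap = sum(sum(f["overlaps"]) for f in frames)
    motp = total_overlap / matches if matches else 0.0
    return mota, motp


# Toy two-frame example with made-up counts:
frames = [
    {"gt": 3, "misses": 1, "fps": 0, "switches": 0, "overlaps": [0.8, 0.7]},
    {"gt": 3, "misses": 0, "fps": 1, "switches": 1, "overlaps": [0.9, 0.75, 0.8]},
]
mota, motp = clear_mot(frames)
```

On this toy input there are 3 errors over 6 ground-truth objects, so MOTA is 0.5, and the 5 matched pairs average to a MOTP of 0.79.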

doi: 10.21437/Interspeech.2014-267

Cite as: Karpov, A., Akarun, L., Yalçın, H., Ronzhin, A., Demiröz, B.E., Çoban, A., Železný, M. (2014) Audio-visual signal processing in a multimodal assisted living environment. Proc. Interspeech 2014, 1023-1027, doi: 10.21437/Interspeech.2014-267

@inproceedings{karpov14_interspeech,
  author={Alexey Karpov and Lale Akarun and Hülya Yalçın and Alexander Ronzhin and Barış Evrim Demiröz and Aysun Çoban and Miloš Železný},
  title={{Audio-visual signal processing in a multimodal assisted living environment}},
  booktitle={Proc. Interspeech 2014},
  year={2014},
  pages={1023--1027},
  doi={10.21437/Interspeech.2014-267}
}