ISCA Archive eNTERFACE 2006

Multimodal signal processing and interaction for a driving simulator: component-based architecture

Alexandre Benoit, L. Bonnaud, Alice Caplier, Y. Damousis, D. Tzovaras, F. Jourde, L. Nigay, M. Serrano, L. Lawson

After a first project at the eNTERFACE 2005 workshop focusing on developing video-based modalities for an augmented driving simulator, this project aims at designing and developing a multimodal driving simulator based on both multimodal detection of the driver's focus of attention and detection and prediction of the driver's fatigue state. Capturing and interpreting the driver's focus of attention and fatigue state is based on video data (e.g., facial expression, head movement, eye tracking). While the input multimodal interface relies on passive modalities only (also called an attentive user interface), the output multimodal user interface includes several active output modalities for presenting alert messages, including graphics and text on a mini-screen and on the windshield, sounds, speech, and vibration (a vibrating steering wheel). Active input modalities are added in the meta-user interface to let the driver dynamically select the output modalities. The driving simulator serves as a case study for exploring software architectures for multimodal signal processing and multimodal interaction, using two component-based software platforms, OpenInterface and ICARE.
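The abstract names OpenInterface and ICARE but shows no code; as a rough illustration of the component-based style it describes, the Java sketch below composes output-modality components under a redundancy policy in the spirit of ICARE's composition components, with the driver enabling modalities through the meta-user interface. All class and method names (OutputModality, RedundantOutput, and so on) are hypothetical and do not reflect the actual OpenInterface or ICARE APIs.

```java
import java.util.ArrayList;
import java.util.List;

/** An output modality component that can present an alert message. */
interface OutputModality {
    String name();
    void present(String alert);
}

/** Illustrative device components; the concrete renderers are stubbed. */
class ScreenText implements OutputModality {
    public String name() { return "mini-screen text"; }
    public void present(String alert) { System.out.println("[screen] " + alert); }
}

class SpeechSynthesis implements OutputModality {
    public String name() { return "speech"; }
    public void present(String alert) { System.out.println("[speech] " + alert); }
}

class WheelVibration implements OutputModality {
    public String name() { return "wheel vibration"; }
    public void present(String alert) { System.out.println("[vibration] " + alert); }
}

/**
 * Redundancy composition: the same alert is presented on every
 * modality the driver has enabled via the meta-user interface.
 */
class RedundantOutput {
    private final List<OutputModality> enabled = new ArrayList<>();

    void enable(OutputModality m) { enabled.add(m); }

    void alert(String message) {
        for (OutputModality m : enabled) {
            m.present(message);
        }
    }
}

public class AlertDemo {
    public static void main(String[] args) {
        RedundantOutput output = new RedundantOutput();
        output.enable(new ScreenText());
        output.enable(new SpeechSynthesis());
        // The driver left wheel vibration deselected, so it is not enabled.
        output.alert("Fatigue detected: please take a break");
    }
}
```

Swapping the redundancy policy for an equivalence or complementarity policy would only change the composition component, not the modality components, which is the modularity argument the abstract makes for the component-based approach.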

Index Terms— Attention level, Component, Driving simulator, Facial movement analysis, ICARE, Interaction modality, Multimodal interaction, OpenInterface, Software architecture.