ISCA Archive Eurospeech 1997

Real-time lip-tracking for lipreading

Rainer Stiefelhagen, Uwe Meier, Jie Yang

This paper presents a new approach to lip tracking for lipreading. Instead of tracking features on the lips alone, we propose to track the lips together with other facial features such as the pupils and nostrils. In the new approach, the face is first located in an image using a stochastic skin-color model; the eyes, lip corners, and nostrils are then located and tracked inside the facial region. The new approach effectively improves the robustness of lip tracking and simplifies the automatic detection of, and recovery from, tracking failures. The feasibility of the proposed approach has been demonstrated by the implementation of a lip-tracking system. The system has been tested on a database that contains 900 image sequences of different speakers spelling words. It successfully extracted lip regions from the image sequences to obtain training data for an audio-visual speech recognition system. The system has also been applied to extract the lip region in real time from live video images to provide the visual input for an audio-visual speech recognition system. On test sequences, we reduced the number of frames with tracking failures by a factor of two using detection and prediction of outliers in the set of found features.
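The first stage of the pipeline described above, locating the face with a stochastic skin-color model, is commonly realized as a per-pixel Gaussian classifier in a normalized chromaticity space. The sketch below illustrates that idea; it is not the authors' implementation, and the mean, covariance, and threshold values are illustrative placeholders, not parameters from the paper.

```python
import numpy as np

def skin_probability(image, mean, cov):
    """Per-pixel skin likelihood under a 2-D Gaussian in normalized
    rg-chromaticity space (r = R/(R+G+B), g = G/(R+G+B)).
    `mean` (2,) and `cov` (2,2) would normally be estimated from
    labeled skin pixels; the values passed in here are illustrative."""
    img = image.astype(np.float64)
    s = img.sum(axis=2) + 1e-9                     # avoid division by zero
    rg = np.stack([img[..., 0] / s, img[..., 1] / s], axis=-1)
    diff = rg - mean
    inv = np.linalg.inv(cov)
    # Mahalanobis distance -> unnormalized Gaussian density per pixel
    d2 = np.einsum('...i,ij,...j->...', diff, inv, diff)
    return np.exp(-0.5 * d2)

# Toy example: one skin-like pixel and one strongly blue pixel
mean = np.array([0.45, 0.31])                      # assumed skin chromaticity
cov = np.array([[0.002, 0.0], [0.0, 0.002]])       # assumed covariance
img = np.array([[[180, 120, 90], [20, 40, 200]]], dtype=np.uint8)
p = skin_probability(img, mean, cov)
mask = p > 0.5                                     # candidate face pixels
```

Thresholding the resulting probability map yields a binary face-candidate region, inside which the feature search (eyes, lip corners, nostrils) can then be restricted.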

doi: 10.21437/Eurospeech.1997-532

Cite as: Stiefelhagen, R., Meier, U., Yang, J. (1997) Real-time lip-tracking for lipreading. Proc. 5th European Conference on Speech Communication and Technology (Eurospeech 1997), 2007-2010, doi: 10.21437/Eurospeech.1997-532

@inproceedings{stiefelhagen97_eurospeech,
  author={Rainer Stiefelhagen and Uwe Meier and Jie Yang},
  title={{Real-time lip-tracking for lipreading}},
  year={1997},
  booktitle={Proc. 5th European Conference on Speech Communication and Technology (Eurospeech 1997)},
  pages={2007--2010},
  doi={10.21437/Eurospeech.1997-532}
}