In this paper, we apply a general, discriminative feature called "GIF" (Genetic-algorithm-based Informative Feature) to lipreading (visual speech recognition), and improve lipreading performance using speaker adaptation. The feature extraction method consists of two transforms that convert an input vector into a GIF vector for recognition. For speaker adaptation, MAP (Maximum A Posteriori) adaptation is used to adapt a recognition model to a target speaker. Recognition experiments on continuous digit utterances were conducted using the audio-visual corpus CENSREC-1-AV [1], which includes more than 268,000 lip images. First, we compared the GIF-based method with a baseline method employing conventional eigenlip features, using two kinds of images: images from the database cropped around the speakers' mouths, and extracted images containing only the lips. Second, we evaluated the effectiveness of speaker adaptation for lipreading. The comparison shows that the GIF-based approach performed slightly better than the baseline method, and that the mouth-around images are more suitable than the lip-only images. Furthermore, speaker adaptation significantly improved recognition accuracy in the GIF-based method; after adaptation, the recognition rate increased drastically from approximately 30% to 70%.
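To illustrate the MAP adaptation step mentioned above, the following is a minimal sketch of MAP mean adaptation for a single Gaussian, the standard form in which HMM means are interpolated toward a target speaker's data. The function name, the prior weight `tau`, and the example posteriors are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def map_adapt_mean(prior_mean, frames, gamma, tau=10.0):
    """MAP-adapt a Gaussian mean: interpolate the speaker-independent
    prior mean toward the target speaker's data, weighted by the
    accumulated frame posteriors (occupation counts)."""
    gamma = np.asarray(gamma, dtype=float)    # per-frame occupation probabilities, shape (T,)
    frames = np.asarray(frames, dtype=float)  # adaptation feature vectors, shape (T, D)
    occ = gamma.sum()                         # total occupation count
    weighted_sum = (gamma[:, None] * frames).sum(axis=0)
    # With little data (occ small) the result stays near the prior mean;
    # with much data it approaches the sample mean of the new speaker.
    return (tau * np.asarray(prior_mean, dtype=float) + weighted_sum) / (tau + occ)

# Example: 5 frames of all-ones features, unit posteriors, zero prior mean.
mu0 = np.zeros(3)
adapted = map_adapt_mean(mu0, np.ones((5, 3)), np.ones(5), tau=5.0)
# adapted = (5*0 + 5) / (5 + 5) = 0.5 in each dimension
```

The prior weight `tau` controls how much adaptation data is needed before the model moves away from the speaker-independent parameters.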
[1] S. Tamura et al., "CENSREC-1-AV: An audio-visual corpus for noisy bimodal speech recognition", Proc. AVSP2010, pp. 85-88 (2010).
Index Terms: discriminative feature, lipreading, speaker adaptation, lip extraction, CENSREC