Maximized modality or constrained consistency?
Clifford Nass, Li Gong
Infants' perception of the audible, visible and bimodal attributes of talking and singing faces
David J. Lewkowicz
Cross-Modal Integration: Bringing Coherence to the Sensory World
Barry E. Stein, Mark T. Wallace, Wan Jiang, Huai Jian, J. William Vaughn
Visual context effects on the perception of /r/ and /l/: Varying F1 and F2 acoustic characteristics
Linda W. Norrix, Kerry P. Green
The contribution of visual information to on-line sentence processing: Evidence from phoneme monitoring
Ethan A. Cox, Linda W. Norrix, Kerry P. Green
Lateralized event-related cortical potentials in discriminating images of facial speech
M. de Haan, Ruth Campbell
Activation in auditory cortex by speechreading in hearing people: fMRI studies
Ruth Campbell, G. Calvert, M. Brammer, M. MacSweeney, S. Surguladze, P. McGuire, B. Woll, S. Williams, E. Amaro, A.S. David
Effect of facial brightness reversal on visual and audiovisual speech perception
Rika Kanzaki, Ruth Campbell
An analysis of the effects of clear speech on the visual-speech intelligibility of consonants
J.P. Gagné, M.J. Charest, A.J. Rochette
Modality, perceptual encoding speed, and time-course of phonetic information
Philip Franz Seitz, Ken W. Grant
Lexical influences on the McGurk effect
Lawrence Brancazio
Perception of clearly presented foreign language sounds: The effects of visible speech
Chris Davis, Jeesun Kim
The integration of auditory and visual speech information with foreign speakers: The role of expectancy
Denis Burnham, Susanna Lau
Automatic computer lip-reading using fuzzy set theory
James F. Baldwin, Trevor P. Martin, Mehreen Saeed
A diffusion network approach to visual speech recognition
Javier R. Movellan, Paul Mineiro
Feature based representation for audio-visual speech recognition
Partha Niyogi, Eric Petajan, Jialin Zhong
Audio-visual sensor fusion with neural architectures
B. Talle, A. Wichert
On the use of visual information for improving audio-based speaker recognition
Andrew Senior, Chalapathy V. Neti, Benoit Maison
Estimation of speech acoustics from visual speech features: A comparison of linear and non-linear models
J. P. Barker, F. Berthommier
Facial deformation parameters for audiovisual synthesis
E. Vatikiotis-Bateson, Takaaki Kuratate, Miyuki Kamachi, Hani Yehia
Synthetic visual speech driven from auditory speech
Eva Agelfors, Jonas Beskow, Björn Granström, Magnus Lundeberg, Giampiero Salvi, Karl-Eric Spens, Tobias Öhman
A text-speech synchronization technique with applications to talking heads
Fabio Vignoli, Carlo Braccini
Picture my voice: Audio to visual speech synthesis using artificial neural networks
Dominic W. Massaro, Jonas Beskow, Michael M. Cohen, Christopher L. Fry, Tony Rodriguez
A symbolic system for multi-purpose description of the mouth shapes
Kazuya Imaizumi, Shizuo Hiki, Yumiko Fukuda
A tool for designing MPEG-4 compliant expressions and animations on VRML cartoon-faces
Georg Fries, Aldo Paradiso, Frank Nack, Karlheinz Schuhmacher
Developing a 3D-agent for the August dialogue system
Magnus Lundeberg, Jonas Beskow
Audio-visual speech synthesis for Finnish
Jean-Luc Olives, Riikka Möttönen, Janne Kulju, Mikko Sams
Face translation: A multimodal translation agent
Max Ritter, Uwe Meier, Jie Yang, Alex Waibel