Visual speech influences speeded auditory identification
Tim Paris, Jeesun Kim, Chris Davis
Do infants detect A-V articulator congruency for non-native click consonants?
Catherine T. Best, Christian Kroos, Julia Irwin
Perceiving visual prosody from point-light displays
Erin Cvejic, Jeesun Kim, Chris Davis
Binding and unbinding the McGurk effect in audiovisual speech fusion: follow-up experiments on a new paradigm
Olha Nahorna, Frédéric Berthommier, Jean-Luc Schwartz
Children’s expression of uncertainty in collaborative and competitive contexts
Mandy Visser, Emiel Krahmer, Marc Swerts
The effect of seeing the interlocutor on auditory and visual speech production in noise
Michael Fitzpatrick, Jeesun Kim, Chris Davis
Auditory-visual discrimination and identification of lexical tone within and across tone languages
Denis Burnham, Virginie Attina, Benjawan Kasisopa
Audiovisual perception of counter-expectational questions
Joan Borràs-Comes, Cecilia Pugliesi, Pilar Prieto
Introducing visual target cost within an acoustic-visual unit-selection speech synthesizer
Utpala Musti, Vincent Colotte, Asterios Toutios, Slim Ouni
Auditory and photo-realistic audiovisual speech synthesis for Dutch
Wesley Mattheyses, Lukas Latacz, Werner Verhelst
Photo-realistic visual speech synthesis based on AAM features and an articulatory DBN model with constrained asynchrony
Peng Wu, Dongmei Jiang, He Zhang, Hichem Sahli
Audiovisual speech processing in visual speech noise
Jeesun Kim, Chris Davis
Audiovisual streaming in voicing perception: new evidence for a low-level interaction between audio and visual modalities
Frédéric Berthommier, Jean-Luc Schwartz
An ordinal model of the McGurk illusion
Tobias S. Andersen
Thin slices of head movements during problem solving reveal level of difficulty
Bart Joosten, Marije van Amelsvoort, Emiel Krahmer, Eric Postma
Dimensional mapping of multimodal integration on audiovisual emotion perception
Yoshiko Arimoto, Kazuo Okanoya
Turn-taking control using gaze in multiparty human-computer dialogue: effects of 2D and 3D displays
Samer Al Moubayed, Gabriel Skantze
Bilingual corpus for AVASR using multiple sensors and depth information
Georgios Galatas, Gerasimos Potamianos, Dimitrios Kosmopoulos, Chris McMurrough, Fillia Makedon
Kinetic data for large-scale analysis and modeling of face-to-face conversation
Jonas Beskow, Simon Alexandersson, Samer Al Moubayed, Jens Edlund, David House
“Mask-bot” - a life-size talking head animated robot for AV speech and human-robot communication research
Takaaki Kuratate, Brennard Pierce, Gordon Cheng
Development of communication support system using lip reading
Takeshi Saitoh
LUCIA-webGL: a web-based Italian MPEG-4 talking head
Giuseppe Riccardo Leone, Piero Cosi
Improved detection of ball hit events in a tennis game using multimodal information
Qiang Huang, Stephen Cox, Fei Yan, Teo de Campos, David Windridge, Josef Kittler, William Christmas
Speech-driven lip motion generation for tele-operated humanoid robots
Carlos T. Ishi, Chaoran Liu, Hiroshi Ishiguro, Norihiro Hagita
On the audiovisual asynchrony of speech
László Czap
Talking heads for elderly and Alzheimer patients (THEA): project report and demonstration
Sascha Fagel
Improving naturalness of visual speech synthesis
László Czap, János Mátyás
A robotic head using projected animated faces
Samer Al Moubayed, Simon Alexandersson, Jonas Beskow, Björn Granström