doi: 10.21437/AVSP.2017
Acoustic cue variability affects eye movement behaviour during non-native speech perception: a GAMM model
Jessie S. Nixon, Catherine T. Best
The effect of age and hearing loss on partner-directed gaze in a communicative task
Chris Davis, Jeesun Kim, Outi Tuomainen, Valerie Hazan
Referential Gaze Makes a Difference in Spoken Language Comprehension: Human Speaker vs. Virtual Agent Listener Gaze
Eva Maria Nunnemann, Kirsten Bergmann, Helene Kreysa, Pia Knoeferle
The influence of handedness and pointing direction on deictic gestures and speech interaction: Evidence from motion capture data on Polish counting-out rhymes
Katarzyna Stoltmann, Susanne Fuchs
The Influence of Familial Sinistrality on Audiovisual Speech Perception
Sandhya Vinay, Dawn Behne
Using deep neural networks to estimate tongue movements from speech face motion
Christian Kroos, Rikke Bundgaard-Nielsen, Catherine Best, Mark D. Plumbley
End-to-End Audiovisual Fusion with LSTMs
Stavros Petridis, Yujiang Wang, Zuwei Li, Maja Pantic
Using visual speech information and perceptually motivated loss functions for binary mask estimation
Danny Websdale, Ben Milner
Combining Multiple Views for Visual Speech Recognition
Marina Zimmermann, Mostafa Mehdipour Ghazi, Hazim Kemal Ekenel, Jean-Philippe Thiran
On the quality of an expressive audiovisual corpus: a case study of acted speech
Slim Ouni, Sara Dahmani, Vincent Colotte
Thin slicing to predict viewer impressions of TED Talks
Ailbhe Cullen, Naomi Harte
Exploring ROI size in deep learning based lipreading
Alexandros Koumparoulis, Gerasimos Potamianos, Youssef Mroueh, Steven J. Rennie
Towards Lipreading Sentences with Active Appearance Models
George Sterpu, Naomi Harte
Lipreading using deep bottleneck features for optical and depth images
Satoshi Tamura, Koichi Miyazaki, Satoru Hayamizu
Inner Lips Parameter Estimation based on Adaptive Ellipse Model
Li Liu, Gang Feng, Denis Beautemps
Processing of visuo-auditory prosodic information in cochlear-implanted deaf patients
Pascal Barone, Mathieu Marx, Anne Lasfargues-Delannoy
Acoustic features of multimodal prominences: Do visual beat gestures affect verbal pitch accent realization?
Gilbert Ambrazaitis, David House
Contribution of visual rhythmic information to speech perception in noise
Vincent Aubanel, Cassandra Masters, Jeesun Kim, Chris Davis
Perceived Audiovisual Simultaneity in Speech by Musicians and Nonmusicians: Preliminary Behavioral and Event-Related Potential (ERP) Findings
Dawn Behne, Marzieh Sorati, Magnus Alm
The developmental path of multisensory perception of emotion and phoneme in Japanese speakers
Hisako W. Yamamoto, Misako Kawahara, Akihiro Tanaka
Impact of Culture on the Development of Multisensory Emotion Perception
Misako Kawahara, Disa Sauter, Akihiro Tanaka
Multisensory Perception of Emotion for Human and Chimpanzee Expressions by Humans
Marina Kawase, Ikuma Adachi, Akihiro Tanaka
Cross-Language Perception of Audio-visual Attitudinal Expressions
Hansjörg Mixdorff, Angelika Hönemann, Albert Rilliard, Tan Lee, Matthew Ma
Facial activity of attitudinal speech in German
Angelika Hoenemann, Petra Wagner
The McGurk Effect: Auditory Visual Speech Perception’s Piltdown Man
Dominic Massaro
Impact of early bilingualism on infants’ ability to process talking and non-talking faces: new data from 9-month-old infants
Mathilde Fort, Núria Sebastián-Gallés
Atypical phonemic discrimination but not audiovisual speech integration in children with autism and the broader autism phenotype
Julia Irwin, Trey Avery, Jacqueline Turcios, Lawrence Brancazio, Barbara Cook, Nicole Landi
Learning to recognize unfamiliar talkers from the word-level dynamics of visual speech
Alexandra Jesse, Paul Saba
Applying the summation model in audiovisual speech perception
Kaisa Tiippana, Ilmari Kurki, Tarja Peromaa