ISCA Archive AVSP 2008 Sessions Search



Auditory-Visual Speech Processing

Tangalooma Wild Dolphin Resort, Moreton Island, Queensland, Australia
26-29 September 2008

Contributed Papers

On evaluating synthesised visual speech
Barry-John Theobald, Nicholas Wilkinson, Iain Matthews

Building a portable gesture-to-audio/visual speech system
Sidney Fels, Robert Pritchard, Eric Vatikiotis-Bateson

The effects of temporal asynchrony on the intelligibility of accelerated speech
Douglas S. Brungart, Nandini Iyer, Brian D. Simpson, Virginie van Wassenhove

Audio-visual voice command recognition in noisy conditions
Josef Chaloupka, Jan Nouza, Jindrich Zdansky

Perception of ‘speech-and-gesture’ integration
Gianluca Giorgolo, Frans A. J. Verstraten

Analysis of inter- and intra-speaker variability of head motions during spoken dialogue
Carlos Toshinori Ishi, Hiroshi Ishiguro, Norihiro Hagita

German text-to-audiovisual-speech by 3-d speaker cloning
Sascha Fagel, Gérard Bailly

Visual field advantage in the perception of audiovisual speech segments
Dawn Behne, Yue Wang, Stein-Ove Belsby, Solveig Kaasa, Lisa Simonsen, Kirsti Back

CENSREC-AV: evaluation frameworks for audio-visual speech recognition
Satoshi Tamura, Chiyomi Miyajima, Norihide Kitaoka, Satoru Hayamizu, Kazuya Takeda

McGurk effect persists with a partially removed visual signal
Christian Kroos, Ashlie Dreves

Guided non-linear model estimation (gnoME)
Sascha Fagel, Katja Madany

Multimodal perception of anticipatory behavior - Comparing blind, hearing and cued speech subjects
Emilie Troille, Marie-Agnès Cathiard, Christian Abry, Lucie Ménard, Denis Beautemps

Patch-based analysis of visual speech from multiple views
Patrick Lucey, Gerasimos Potamianos, Sridha Sridharan

A comparison of German talking heads in a smart home environment
Sascha Fagel, Christine Kuehnel, Benjamin Weiss, Ina Wechsung, Sebastian Moeller

Effect of audio-visual asynchrony between time-expanded speech and a moving image of a talker’s face on detection and tolerance thresholds
Shuichi Sakamoto, Akihiro Tanaka, Shun Numahata, Atsushi Imai, Tohru Takagi, Yôiti Suzuki

A neurofunctional model of speech production including aspects of auditory and audio-visual speech perception
Bernd J. Kröger, Jim Kannampuzha

Auditory-visual perception of prosodic information: inter-linguistic analysis - contrastive focus in French and Japanese
Marion Dohen, Chun-Huei Wu, Harold Hill

May speech modifications in noise contribute to enhance audio-visible cues to segment perception?
Maëva Garnier

Audiovisual alignment in child-directed speech facilitates word learning
Alexandra Jesse, Elizabeth K. Johnson

Hearing a talking face: an auditory influence on a visual detection task
Jeesun Kim, Christian Kroos, Chris Davis

Speaking with smile or disgust: data and models
Gérard Bailly, Antoine Bégault, Frédéric Elisei, Pierre Badin

A multilevel fusion approach for audiovisual emotion recognition
Girija Chetty, Michael Wagner

Statistical correlation analysis between lip contour parameters and formant parameters for Mandarin monophthongs
Junru Wu, Xiaosheng Pan, Jiangping Kong, Alan Wee-Chung Liew

From talking to thinking heads: report 2008
Denis Burnham, A. Abrahamyan, L. Cavedon, Chris Davis, A. Hodgins, Jeesun Kim, Christian Kroos, Takaaki Kuratate, T. Lewis, M. Luerssen, G. Paine, D. Powers, M. Riley, Stelarc, K. Stevens

Algorithm for computing spatiotemporal coordination
Adriano V. Barbosa, Hani C. Yehia, Eric Vatikiotis-Bateson

Fused HMM adaptation of synchronous HMMs for audio-visual speaker verification
David Dean, Sridha Sridharan

Describing "INTERFACE" a Matlab© tool for building talking heads
Piero Cosi, Graziano Tisato

Analysis of technologies and resources for multimodal information kiosk for deaf users
Miloš Železný

Retargeting cued speech hand gestures for different talking heads and speakers
Gérard Bailly, Yu Fang, Frédéric Elisei, Denis Beautemps

A, V, and AV discrimination of vowel duration
Björn Lidestam

Towards real-time speech-based facial animation applications built on HUGE architecture
Goranka Zoric, Igor S. Pandzic

Improving pain recognition through better utilisation of temporal information
Patrick Lucey, Jessica Howlett, Jeffrey F. Cohn, Simon Lucey, Sridha Sridharan, Zara Ambadar

Linguistically valid movement behavior measured non-invasively
Adriano V. Barbosa, Hani C. Yehia, Eric Vatikiotis-Bateson

The challenge of multispeaker lip-reading
Stephen Cox, Richard Harvey, Yuxuan Lan, Jacob Newman, Barry-John Theobald

Audio-visual feature selection and reduction for emotion classification
Sanaul Haq, Philip J. B. Jackson, James D. Edge

Text-to-AV synthesis system for Thinking Head Project
Takaaki Kuratate

Objective and perceptual evaluation of parameterizations of 3d motion captured speech data
Katja Madany, Sascha Fagel

Listening while speaking: new behavioral evidence for articulatory-to-auditory feedback projections
Marc Sato, Emilie Troille, Lucie Ménard, Marie-Agnès Cathiard, Vincent Gracco

Age-related experience in audio-visual speech perception
Magnus Alm, Dawn Behne

A model for the dynamics of articulatory lip movements
Þórir Harðarson, Hans-Heinrich Bothe

Evaluation of synthesized sign and visual speech by deaf
Zdeněk Krňoul, Patrik Roštík, Miloš Železný

Lip segmentation using adaptive color space training
Erol Ozgur, Berkay Yilmaz, Harun Karabalkan, Hakan Erdogan, Mustafa Unel

Static and dynamic lip feature analysis for speaker verification
S. L. Wang, Alan Wee-Chung Liew

Parameterisation of 3d speech lip movements
James D. Edge, Adrian Hilton, Philip J. B. Jackson

A comparative study of 2d and 3d lip tracking methods for AV ASR
Roland Göcke, Akshay Asthana
