ISCA Archive Interspeech 2015

BLSTM neural networks for speech driven head motion synthesis

Chuang Ding, Pengcheng Zhu, Lei Xie

Head motion naturally occurs in synchrony with speech and conveys important cues about intention, attitude and emotion. This paper aims to synthesize head motion from natural speech for talking avatar applications. Specifically, we study the feasibility of learning speech-to-head-motion regression models with two popular types of neural networks: feed-forward and bidirectional long short-term memory (BLSTM). We find that the BLSTM networks clearly outperform the feed-forward ones in this task, thanks to their capacity to learn long-range speech dynamics. More interestingly, we observe that stacking different network types, i.e., inserting a feed-forward layer between two BLSTM layers, achieves the best performance. Subjective evaluation shows that this hybrid network can produce more plausible head motion from speech.
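To make the hybrid architecture concrete, the following is a minimal PyTorch sketch of a BLSTM-feedforward-BLSTM regression network of the kind the abstract describes. The feature dimensions, hidden sizes, activation choice and output parameterization (39-dimensional acoustic features, three head rotation angles) are illustrative assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class HybridHeadMotionNet(nn.Module):
    """Sketch of a hybrid speech-to-head-motion regression network:
    a feed-forward layer inserted between two BLSTM layers.
    All sizes are assumptions for illustration."""

    def __init__(self, in_dim=39, hidden=128, out_dim=3):
        super().__init__()
        # First bidirectional LSTM over the per-frame speech features.
        self.blstm1 = nn.LSTM(in_dim, hidden,
                              bidirectional=True, batch_first=True)
        # Feed-forward layer placed between the two BLSTM layers.
        self.ff = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Tanh())
        # Second bidirectional LSTM.
        self.blstm2 = nn.LSTM(hidden, hidden,
                              bidirectional=True, batch_first=True)
        # Linear regression output: per-frame head motion parameters
        # (here, hypothetically, three rotation angles).
        self.out = nn.Linear(2 * hidden, out_dim)

    def forward(self, x):            # x: (batch, frames, in_dim)
        h, _ = self.blstm1(x)        # (batch, frames, 2*hidden)
        h = self.ff(h)               # (batch, frames, hidden)
        h, _ = self.blstm2(h)        # (batch, frames, 2*hidden)
        return self.out(h)           # (batch, frames, out_dim)

model = HybridHeadMotionNet()
speech = torch.randn(4, 200, 39)     # 4 utterances, 200 frames each
head_motion = model(speech)          # -> shape (4, 200, 3)
```

Because both LSTM layers are bidirectional, each frame's prediction can draw on both past and future speech context, which is what allows such a network to capture the long-range dynamics the abstract credits for the BLSTM's advantage over feed-forward models.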