ISCA Archive AVSP 1998

Generation of Lip-Synched Synthetic Faces From Phonetically Clustered Face Movement Data

Francisco M. Galanes, Jack Unverferth, Levent M. Arslan, David Talkin

In this paper we present a method for generating lip-synched synthetic faces using phonetically clustered data. This method allows us to train lip-movement models from a database of facial trajectories recorded synchronously with speech data. The whole process is automatic and requires no hand processing of the data once the database has been collected. The main discussion focuses on the analysis of real-life data and the generation of a set of regression trees that allow us to synthesize speech-related facial movements that can drive a three-dimensional model of a face.
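The core idea described above, grouping recorded face-movement frames by phonetic context and reusing each cluster's statistics at synthesis time, can be illustrated with a toy sketch. This is not the authors' system: the paper builds regression trees over phonetic contexts, while the sketch below collapses that to a single categorical split on phone identity (one cluster per phone, storing the mean lip aperture). All phone labels, aperture values, and function names are hypothetical.

```python
# Toy illustration of phonetically clustered lip-motion synthesis.
# Simplification of the paper's regression-tree approach: each phone
# label is its own cluster, and a cluster predicts the mean lip
# aperture of the frames assigned to it.

def train_clusters(frames):
    """frames: list of (phone, aperture) pairs -> {phone: mean aperture}."""
    sums, counts = {}, {}
    for phone, aperture in frames:
        sums[phone] = sums.get(phone, 0.0) + aperture
        counts[phone] = counts.get(phone, 0) + 1
    return {p: sums[p] / counts[p] for p in sums}

def synthesize(phones, clusters, default=0.5):
    """Map a phone sequence to a per-frame lip-aperture trajectory,
    falling back to a neutral value for unseen phones."""
    return [clusters.get(p, default) for p in phones]

# Hypothetical recorded data: open vowels have a large aperture,
# bilabial closures are near zero.
data = [("aa", 0.9), ("aa", 0.7), ("m", 0.1), ("m", 0.0), ("iy", 0.4)]
clusters = train_clusters(data)
trajectory = synthesize(["m", "aa", "iy"], clusters)
print(trajectory)  # one lip-aperture value per input phone
```

A real system would split on richer context (neighboring phones, position within the phone) and predict full 3D marker trajectories rather than a single scalar, which is where the regression trees earn their keep.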


Cite as: Galanes, F.M., Unverferth, J., Arslan, L.M., Talkin, D. (1998) Generation of Lip-Synched Synthetic Faces From Phonetically Clustered Face Movement Data. Proc. Auditory-Visual Speech Processing, 191-194

@inproceedings{galanes98_avsp,
  author={Francisco M. Galanes and Jack Unverferth and Levent M. Arslan and David Talkin},
  title={{Generation of Lip-Synched Synthetic Faces From Phonetically Clustered Face Movement Data}},
  year=1998,
  booktitle={Proc. Auditory-Visual Speech Processing},
  pages={191--194}
}