In this paper we present a method for generating lip-synchronized synthetic faces using phonetically clustered data. The method allows us to learn lip movements from a database of facial trajectories recorded synchronously with speech. The process is fully automatic and requires no manual processing of the data once the database has been collected. The main discussion focuses on the analysis of the recorded data and on the construction of a set of regression trees that allow us to synthesize speech-related facial movements capable of driving a three-dimensional face model.
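As a rough illustration of the regression-tree idea described above, the sketch below maps phonetic-context features to facial trajectory parameters with a multi-output regression tree. It is not the authors' implementation: the feature layout, parameter dimensions, and the use of scikit-learn's DecisionTreeRegressor are all our assumptions, with synthetic data standing in for the recorded database.

```python
# A minimal sketch (assumptions, not the paper's implementation) of
# regression trees mapping phonetic context to facial trajectory parameters.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: each input row encodes a phoneme context
# (current/previous/next phoneme IDs plus normalized position within the
# phoneme); each target row holds facial parameters (e.g. lip opening,
# lip protrusion, jaw rotation) captured synchronously with speech.
n_frames = 2000
X = np.column_stack([
    rng.integers(0, 40, n_frames),    # current phoneme ID
    rng.integers(0, 40, n_frames),    # previous phoneme ID
    rng.integers(0, 40, n_frames),    # next phoneme ID
    rng.uniform(0.0, 1.0, n_frames),  # position within the phoneme
])
y = rng.normal(size=(n_frames, 3))    # stand-in facial parameters

# One tree could be grown per phonetic cluster; here a single
# multi-output tree stands in for the whole clustered set.
tree = DecisionTreeRegressor(max_depth=8, min_samples_leaf=10)
tree.fit(X, y)

# Synthesis: given the phonetic transcription of new speech, predict
# facial parameters frame by frame and feed them to the 3D face model.
new_context = np.array([[12, 5, 20, 0.5]])
print(tree.predict(new_context))
```

In practice the targets would come from the recorded facial trajectories rather than random numbers, and the predicted parameter streams would be smoothed before driving the face model.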