This paper presents a preliminary analysis and modelling of facial motion capture data recorded from a speaker uttering nonsense syllables and sentences with various acted facial expressions. We analyze the impact of facial expressions on articulation and determine the prediction errors of simple models trained to map neutral articulation to each of the targeted facial expressions. We show that the movements of some speech organs, such as the jaw and the lower lip, are relatively unaffected by the facial expressions considered here (smile, disgust), while others, such as the movement of the upper lip or the translation of the jaw, are markedly perturbed. We also show that these perturbations are not simply additive and that they depend on articulation.
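
To make the comparison concrete, the sketch below contrasts two simple ways of mapping neutral articulation to an expressive counterpart: a purely additive offset per parameter versus an affine mapping whose perturbation depends on the articulation itself. This is only an illustrative toy, not the paper's implementation; the array names, the synthetic data, and the choice of a least-squares affine fit are assumptions made for the example.

```python
# Minimal sketch (assumed setup, not the paper's actual models): compare
#   - an additive model:  expressive ≈ neutral + constant offset
#   - an affine model:    expressive ≈ A @ neutral + b  (articulation-dependent)
# X_neutral / X_smile stand in for frame-synchronous articulatory parameters
# extracted from motion capture; here they are synthetic placeholders.

import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)

n_frames, n_params = 500, 6           # e.g. jaw, lip and cheek parameters per frame
X_neutral = rng.normal(size=(n_frames, n_params))

# Synthetic "smile" data built with an articulation-dependent perturbation,
# so a constant offset alone cannot fully explain it.
A_true = np.eye(n_params) + 0.3 * rng.normal(size=(n_params, n_params))
b_true = rng.normal(size=n_params)
X_smile = X_neutral @ A_true.T + b_true + 0.05 * rng.normal(size=(n_frames, n_params))

def rmse(pred, target):
    """Root-mean-square prediction error over all frames and parameters."""
    return np.sqrt(np.mean((pred - target) ** 2))

# Additive model: one constant offset per parameter.
offset = (X_smile - X_neutral).mean(axis=0)
err_additive = rmse(X_neutral + offset, X_smile)

# Affine model: least-squares fit of expressive frames from neutral frames.
X_aug = np.hstack([X_neutral, np.ones((n_frames, 1))])   # append bias column
W, *_ = lstsq(X_aug, X_smile, rcond=None)
err_affine = rmse(X_aug @ W, X_smile)

print(f"additive-offset RMSE: {err_additive:.3f}")
print(f"affine-mapping RMSE:  {err_affine:.3f}")
```

On data where the expressive perturbation genuinely depends on articulation, the affine model yields a noticeably lower prediction error than the additive offset, which is the kind of evidence the abstract refers to when stating that the perturbations are not simply additive.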