This article presents our first simulation of a feedforward controller for an articulatory model. Our approach is inspired by motor control theory: speech is considered as a sequence of gestures made audible. These gestures are task-oriented and realized by a biological system (the speech production system) with excess degrees of freedom. By adding general constraints on the learning of an entire movement, the model is able to infer smooth articulatory gestures while preserving distinctiveness. The computational model is based on the sequential network paradigm introduced by Jordan (1988). We show that such a model is able to invert the articulatory-to-acoustic mapping, exhibiting the anticipatory behavior and compensation effects observed in real speech.
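To make the control scheme concrete, the sketch below shows one possible reading of a Jordan-style sequential network acting as a feedforward controller: the previous articulatory output is fed back as state, a stand-in forward model maps articulation to acoustics, and the loss combines a task-space (acoustic) error with a smoothness constraint on the articulatory trajectory. All names, dimensions, the linear forward model, and the particular smoothness penalty are assumptions made for illustration; they are not taken from the paper.

```python
# Illustrative sketch only: a minimal Jordan-style sequential network used as a
# feedforward controller for a stand-in articulatory synthesizer. Dimensions,
# weights, and the linear "forward model" below are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: plan/target input, hidden layer, articulatory output,
# acoustic output of the stand-in forward model.
n_plan, n_hidden, n_artic, n_acoust = 4, 16, 3, 2

# Controller weights (Jordan network: the previous output is fed back as state).
W_in = rng.normal(scale=0.1, size=(n_hidden, n_plan + n_artic))
W_out = rng.normal(scale=0.1, size=(n_artic, n_hidden))

# Stand-in articulatory-to-acoustic forward model (here just a fixed linear map).
A = rng.normal(scale=0.5, size=(n_acoust, n_artic))


def run_controller(plan_sequence):
    """Roll the Jordan network over a sequence of plan (target) vectors.

    The previous articulatory output is concatenated with the current plan
    input, so the controller's own state feedback lets context from earlier
    frames shape the current command.
    """
    state = np.zeros(n_artic)            # recurrent state = last articulatory output
    articulation = []
    for plan in plan_sequence:
        hidden = np.tanh(W_in @ np.concatenate([plan, state]))
        state = np.tanh(W_out @ hidden)  # articulatory command for this frame
        articulation.append(state)
    return np.array(articulation)


def distal_loss(articulation, acoustic_targets, smooth_weight=0.1):
    """Acoustic (task-space) error plus a smoothness constraint.

    The smoothness term penalizes frame-to-frame articulatory velocity, one
    simple way to express a "general constraint on the learning of an entire
    movement" of the kind mentioned in the text.
    """
    acoustics = articulation @ A.T
    task_error = np.mean((acoustics - acoustic_targets) ** 2)
    velocity = np.diff(articulation, axis=0)
    smoothness = np.mean(velocity ** 2)
    return task_error + smooth_weight * smoothness


# Tiny usage example with random plans and acoustic targets.
T = 10
plans = rng.normal(size=(T, n_plan))
targets = rng.normal(size=(T, n_acoust))
artic = run_controller(plans)
print("loss:", distal_loss(artic, targets))
```

In a full distal-learning setup, the controller weights would be adjusted by propagating the acoustic error back through the forward model; the sketch above only shows the forward pass and the composite cost, not that training procedure.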