We report on our recent facial animation work to improve the realism and accuracy of visual speech synthesis. The general approach is to use both static and dynamic observations of natural speech to guide the facial modeling. One current goal is to model the internal articulators: a highly realistic palate and teeth, and an improved tongue. Because our talking head can be made transparent, it can provide an anatomically valid and pedagogically useful display for speech training of children with hearing loss [1]. High-resolution models of the palate and teeth [2] were reduced to a relatively small number of polygons for real-time animation [3]. For the improved tongue, we are using 3D ultrasound data and electropalatography (EPG) [4] together with error-minimization algorithms to train our parametric B-spline-based tongue model to simulate realistic speech movements. In addition, a high-speed algorithm has been developed to detect and correct collisions, both to prevent the tongue from protruding through the palate and teeth and to enable real-time display of synthetic EPG patterns.
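For context, the polygon-reduction step can be sketched with a modern off-the-shelf tool; the following uses Open3D's quadric decimation as a stand-in for the reduction described above, not the method of [3]. The file names and target triangle count are hypothetical.

```python
# Minimal sketch of mesh decimation for real-time animation, using Open3D's
# quadric-edge-collapse simplification as a modern stand-in. File names and
# the target triangle budget are illustrative assumptions.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("palate_highres.ply")   # hypothetical scan
lowres = mesh.simplify_quadric_decimation(target_number_of_triangles=2000)
o3d.io.write_triangle_mesh("palate_lowres.ply", lowres)  # real-time budget
```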
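The error-minimization idea behind the tongue training can be illustrated with a small sketch: fit the control points of a B-spline contour to measured tongue-surface points by least squares. The 2D midsagittal reduction, the knot vector, and the number of control points are illustrative assumptions; the actual model is a parametric 3D tongue surface.

```python
# A minimal sketch, assuming a 2D midsagittal contour: fit B-spline control
# points to measured ultrasound tongue points by linear least squares.
import numpy as np
from scipy.interpolate import BSpline

DEGREE = 3
N_CTRL = 8  # number of control points (assumed)

def design_matrix(u, knots):
    """Basis matrix B with B[i, j] = N_j(u_i) for the B-spline basis."""
    n = len(knots) - DEGREE - 1
    B = np.empty((len(u), n))
    for j in range(n):
        coeffs = np.zeros(n)
        coeffs[j] = 1.0
        B[:, j] = BSpline(knots, coeffs, DEGREE)(u)
    return B

def fit_tongue_contour(points):
    """Least-squares control points for a contour through measured (x, y) points."""
    # clamped uniform knot vector over [0, 1]
    knots = np.concatenate([np.zeros(DEGREE),
                            np.linspace(0.0, 1.0, N_CTRL - DEGREE + 1),
                            np.ones(DEGREE)])
    u = np.linspace(0.0, 1.0, len(points))  # chord-length param. would be better
    B = design_matrix(u, knots)
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)  # minimize ||B c - p||^2
    return BSpline(knots, ctrl, DEGREE)

# usage: fake measured midsagittal tongue points from one ultrasound frame
pts = np.column_stack([np.linspace(0, 1, 40),
                       0.3 * np.sin(np.pi * np.linspace(0, 1, 40))])
spline = fit_tongue_contour(pts)
print(spline(0.5))  # model point at mid-tongue
```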
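The collision handling can likewise be sketched under simplifying assumptions: treat the hard palate as a height field, project penetrating tongue vertices back onto the surface, and read the contacts out as an EPG-style electrode pattern. The height-field representation, the 8x8 electrode grid, and the dome-shaped toy palate are assumptions for illustration, not the algorithm developed here.

```python
# A minimal sketch of collision detection/correction plus synthetic EPG:
# the palate is modeled as a height field y = palate(x, z), penetrating
# tongue vertices are pushed back under it, and contacts are binned onto
# an electrode grid. All geometry here is an illustrative assumption.
import numpy as np

EPG_ROWS, EPG_COLS = 8, 8  # typical EPG electrode layout (assumed)

def palate_height(x, z):
    """Toy dome-shaped palate over x, z in [0, 1] (assumed geometry)."""
    return 0.6 - 0.3 * ((x - 0.5) ** 2 + (z - 0.5) ** 2)

def resolve_collisions(verts, eps=1e-3):
    """Clamp penetrating tongue vertices to the palate; return synthetic EPG.

    verts: (N, 3) array of tongue-surface vertices, y is the vertical axis.
    Returns corrected vertices and an (EPG_ROWS, EPG_COLS) contact pattern.
    """
    verts = verts.copy()
    epg = np.zeros((EPG_ROWS, EPG_COLS), dtype=bool)
    roof = palate_height(verts[:, 0], verts[:, 2])
    hit = verts[:, 1] > roof - eps          # vertices at or through the palate
    verts[hit, 1] = roof[hit] - eps         # project back just below the surface
    # bin contact points onto the electrode grid
    r = np.clip((verts[hit, 2] * EPG_ROWS).astype(int), 0, EPG_ROWS - 1)
    c = np.clip((verts[hit, 0] * EPG_COLS).astype(int), 0, EPG_COLS - 1)
    epg[r, c] = True
    return verts, epg

# usage: a random tongue patch pressed toward the palate
rng = np.random.default_rng(0)
tongue = rng.uniform([0.2, 0.4, 0.2], [0.8, 0.7, 0.8], size=(500, 3))
tongue, pattern = resolve_collisions(tongue)
print(pattern.astype(int))  # 8x8 synthetic EPG contact pattern
```

Because the palate is static, a height-field (or precomputed distance-field) test of this kind costs only one lookup per tongue vertex, which is what makes per-frame collision correction and live EPG display feasible in real time.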