Articulatory data, such as those collected via electromagnetic articulography (EMA), are valuable for many speech applications, including speech modeling, recognition, and synthesis. Nearly all current EMA applications and methods rely on positional sensor data alone, even though modern 3D-EMA systems also capture sensor orientation, which provides significant additional information about articulator posture and vocal tract shape. To address this gap, this paper introduces a new method for interpolating untracked tongue fleshpoint positions from the combined position and orientation data of three EMA sensors on the tongue, with additional reference sensors used to evaluate interpolation error. Comparison of interpolated and measured data demonstrated the effectiveness of the new method in providing additional tongue shape and position features. The results suggest that analytic methods combining sensor position and orientation data can improve the characterization of tongue kinematics even with a small number of EMA sensors.
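As a concrete illustration of how orientation can supplement position in such an interpolation (a sketch of one plausible formulation, not necessarily the authors' exact method), each sensor can contribute both a point and a tangent direction derived from its measured orientation in a Hermite-type scheme. For adjacent tongue sensors with positions $\mathbf{p}_0, \mathbf{p}_1$ and orientation-derived unit tangents $\mathbf{t}_0, \mathbf{t}_1$ (assumed here to be the sensors' local axes aligned with the tongue surface), an untracked fleshpoint at parameter $u \in [0,1]$ along the segment can be estimated as

$$\hat{\mathbf{x}}(u) = (2u^3 - 3u^2 + 1)\,\mathbf{p}_0 + (u^3 - 2u^2 + u)\,\mathbf{m}_0 + (-2u^3 + 3u^2)\,\mathbf{p}_1 + (u^3 - u^2)\,\mathbf{m}_1,$$

where $\mathbf{m}_i = \lVert \mathbf{p}_1 - \mathbf{p}_0 \rVert\,\mathbf{t}_i$ scales the unit tangents by the inter-sensor distance. Dropping the orientation terms reduces this to piecewise-linear interpolation between sensor positions, which cannot represent local tongue curvature between fleshpoints.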