It is often argued that acoustic-phonetic or articulatory features could benefit automatic speech recognition because they provide a convenient interface between the acoustic and the linguistic level. Previous research has shown that combining acoustic and articulatory information can improve ASR. However, no purely articulatory-driven ASR system yet outperforms state-of-the-art systems driven by acoustic features. In this paper we propose a novel method for improving ASR on the basis of articulatory features. It is designed to take into account (1) the correlations between articulatory features and (2) the fact that not all articulatory features are relevant for describing a given phonetic unit. We also investigate to what extent an acoustic and an articulatory feature driven system make different errors.