This paper describes the application of probabilistic shift-reduce parsing to continuous speech recognition. In previous work, a probabilistic LR parser was applied to the task of speech understanding in the MIT VOYAGER system, where the performance metric was the fraction of utterances for which correct semantics were produced. For the speech recognition task, word and sentence recognition accuracy are the important performance criteria. In this work, the probabilistic LR language model is extended with robust parsing techniques to achieve 100% coverage, even though the underlying grammar has incomplete coverage. The resulting model provides additional constraints over a word bigram but retains the trainability and efficiency of the simpler model. Recognition experiments were performed in the DARPA ATIS task domain using a version of the MIT SUMMIT speech recognizer with context-independent phoneme models. Integrating the new language model into the SUMMIT N-best search algorithm decreased the word error rate from 24.1% to 21.5% and the sentence error rate from 72.9% to 65.2%, compared with a word bigram model.
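The integration described above combines acoustic scores from the recognizer with language-model scores over an N-best hypothesis list. As a minimal sketch of this general rescoring scheme (not the paper's actual implementation), the following illustrates reranking hypotheses with a combined score; the function names, the toy bigram table, and the `lm_weight` parameter are all assumptions introduced for illustration:

```python
import math


def rescore_nbest(hypotheses, lm_logprob, lm_weight=1.0):
    """Rescore an N-best list of (word_sequence, acoustic_logprob) pairs.

    Returns the hypotheses sorted by combined score, best first.
    A richer language model (e.g. a probabilistic LR parser) can be
    substituted for the bigram simply by passing a different lm_logprob.
    """
    scored = [
        (words, acoustic + lm_weight * lm_logprob(words))
        for words, acoustic in hypotheses
    ]
    return sorted(scored, key=lambda h: h[1], reverse=True)


def bigram_logprob(words, bigram_probs, unk=1e-6):
    """Toy word-bigram language model over a fixed probability table."""
    padded = ["<s>"] + list(words) + ["</s>"]
    return sum(
        math.log(bigram_probs.get((w1, w2), unk))
        for w1, w2 in zip(padded, padded[1:])
    )


# Hypothetical example: two competing hypotheses from a recognizer.
bigram_probs = {
    ("<s>", "show"): 0.5, ("show", "flights"): 0.4, ("flights", "</s>"): 0.3,
    ("<s>", "flights"): 0.1, ("flights", "show"): 0.01, ("show", "</s>"): 0.05,
}
nbest = [(["show", "flights"], -10.0), (["flights", "show"], -9.5)]
ranked = rescore_nbest(nbest, lambda w: bigram_logprob(w, bigram_probs))
# The language model overrides the slightly better acoustic score of
# the ungrammatical hypothesis, so "show flights" ranks first.
```

A stronger language model such as the probabilistic LR parser would assign scores that reflect syntactic structure rather than adjacent word pairs, which is the source of the error-rate improvements reported above.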