Various experiments have conclusively shown that superior continuous speech recognition performance is obtained by using context-dependent models for words. We have observed that using the phonetic context to the right across word boundaries when constructing word models improves recognition performance. In a stack decoder for large-vocabulary continuous speech recognition, however, the right-context information across words is not available when constructing a model for a hypothesized word, since we have no indication of what the following word is when computing the match for the current word. In this paper we describe a look-ahead scheme that estimates the right context, allowing an accurate match to be computed for a word.
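One way such a look-ahead might be realized — a hypothetical sketch under our own assumptions, not the paper's actual method — is to score the word with each possible right cross-word context and weight those scores by an estimated distribution over the first phone of the (unknown) following word:

```python
import math

def lookahead_score(scores_by_right_context, right_phone_probs):
    """Combine context-dependent word scores with a look-ahead estimate.

    scores_by_right_context: log-score of the word model built with each
        possible right cross-word phonetic context (illustrative values).
    right_phone_probs: estimated probability that the following word
        begins with each phone (the "look-ahead" distribution).
    """
    # Expected likelihood under the look-ahead distribution, computed in
    # the probability domain, then returned as a log-score.
    total = 0.0
    for phone, log_score in scores_by_right_context.items():
        total += right_phone_probs.get(phone, 0.0) * math.exp(log_score)
    return math.log(total) if total > 0 else float("-inf")

# Toy example: two candidate right contexts with made-up scores.
score = lookahead_score(
    {"ah": math.log(0.2), "k": math.log(0.6)},
    {"ah": 0.5, "k": 0.5},
)
```

A simpler variant would take the maximum over candidate contexts instead of the expectation; which is preferable depends on how the look-ahead estimate is obtained.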