We address the use of multiple pronunciations to improve large-vocabulary continuous-speech recognition. Extensive tests on WSJ material show that more consistent transcriptions and alternate pronunciations yield a 9% error reduction while also reducing the number of mixture parameters. We then explain how cross-word liaisons are handled when extending our system to dictation in French: our solution uses phonological rules that are optionally applied in both training and recognition. On the BREF corpus, proper handling of liaisons improves accuracy by about 10% on a 20K speaker-independent task.
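To illustrate the idea of optional cross-word liaison rules, the following sketch enumerates the pronunciation variants such rules would license. The lexicon entries, phone symbols, and the vowel-triggered condition are hypothetical simplifications, not the paper's actual rule set: each word carries an optional latent liaison consonant that may surface before a vowel-initial word, and because the rule is optional, both the liaised and unliaised forms are kept as alternatives.

```python
# Hypothetical sketch of optional French liaison rules (not the paper's actual system).
# Each lexicon entry: (base phone sequence, optional latent liaison consonant).
LEXICON = {
    "les":  (["l", "e"], "z"),          # "les" may surface a final /z/
    "amis": (["a", "m", "i"], None),    # vowel-initial, no liaison consonant
    "bons": (["b", "o"], "z"),
    "chats": (["ʃ", "a"], None),
}

VOWELS = {"a", "e", "i", "o", "u", "ə"}

def pronunciations(sentence):
    """Enumerate phone-string variants, applying liaison optionally before vowels."""
    words = sentence.split()
    variants = [[]]
    for i, word in enumerate(words):
        base, liaison = LEXICON[word]
        next_is_vowel = i + 1 < len(words) and LEXICON[words[i + 1]][0][0] in VOWELS
        forms = [base]
        if liaison and next_is_vowel:   # optional rule: keep both alternatives
            forms.append(base + [liaison])
        variants = [v + f for v in variants for f in forms]
    return [" ".join(v) for v in variants]

# "les amis" yields both the unliaised and liaised variants;
# "bons chats" yields only one, since liaison is blocked before a consonant.
```

In training, such alternatives let the aligner pick the variant that best matches each utterance; in recognition, both forms are admitted in the decoding network.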