In this paper, two different models of pronunciation are presented: the first is based on a rule set compiled by an expert, while the second is statistical, derived from a survey of the pronunciation variants occurring in the training data. Both models generate pronunciation variants from the canonical forms of words. The two models are evaluated by applying them to the task of automatic segmentation of speech and comparing the results to manual segmentations of the same speech data. The results show that the correspondence between manual and automatic segmentations improves significantly when pronunciation variants are taken into account, and that the statistical model outperforms the rule-based model.
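
As a rough illustration of the rule-based approach (not the authors' actual rule set), the sketch below applies a few hypothetical phone-rewrite rules to a canonical transcription in order to enumerate pronunciation variants; the rules, phone symbols, and function names are assumptions introduced for illustration only.

```python
from itertools import product

# Hypothetical rewrite rules: each maps a canonical phone sequence to its
# possible surface realizations (the canonical form itself is always kept).
# These are illustrative only, not the expert rule set from the paper.
RULES = {
    ("t",): [("t",), ()],                # optional /t/ deletion
    ("a", "n"): [("a", "n"), ("a~",)],   # optional nasalization of /a n/
}

def variants(canonical):
    """Enumerate pronunciation variants of a canonical phone sequence."""
    # Segment the canonical form greedily into rule-covered chunks,
    # falling back to single phones that have no alternative realization.
    chunks = []
    i = 0
    while i < len(canonical):
        for span in (2, 1):
            key = tuple(canonical[i:i + span])
            if key in RULES:
                chunks.append(RULES[key])
                i += span
                break
        else:
            chunks.append([tuple(canonical[i:i + 1])])
            i += 1
    # The Cartesian product of the alternatives yields all variants.
    return [sum(choice, ()) for choice in product(*chunks)]

print(variants(["p", "a", "n", "t"]))
# e.g. [('p', 'a', 'n', 't'), ('p', 'a', 'n'), ('p', 'a~', 't'), ('p', 'a~')]
```

In an actual system, the generated variants would typically be added to the recognition lexicon so that the forced alignment used for automatic segmentation can choose among them; the statistical model would instead assign data-driven probabilities to such variants.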