Frequency warping approaches to speaker normalization have been proposed and evaluated on various speech recognition tasks [1, 2, 3]. In all cases, frequency warping was found to significantly improve recognition performance by reducing the mismatch between the test utterances presented to the recognizer and the speaker-independent HMM. Maximum likelihood (ML) based model adaptation techniques have also been applied to reduce model mismatch by estimating a linear transformation of the model parameters that increases the likelihood of the input utterance. This paper demonstrates that a significant advantage can be gained by performing frequency warping and ML speaker adaptation in a unified framework. A procedure is described that compensates utterances by simultaneously scaling the frequency axis and reshaping the spectral energy contour. This procedure is shown to reduce the error rate on a telephone-based connected digit recognition task by as much as 38% in a single-utterance adaptation scenario.
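The idea of compensating an utterance by scaling its frequency axis can be illustrated with a minimal sketch. The code below is not the paper's procedure; `warp_spectrum`, `select_alpha`, and the grid of candidate warp factors are illustrative assumptions. It applies a linear warp to a magnitude spectrum by interpolation and picks the factor that maximizes a likelihood-style score, mirroring ML selection of the warp against a speaker-independent model:

```python
import numpy as np

def warp_spectrum(spectrum, alpha):
    """Linearly warp the frequency axis of a magnitude spectrum.

    spectrum : 1-D array of magnitudes on a uniform frequency grid
    alpha    : warping factor; the spectrum is resampled at
               alpha-scaled frequency positions
    """
    n = len(spectrum)
    bins = np.arange(n)
    # Sample the original spectrum at warped positions alpha * k,
    # clipping so the resampling grid stays within the valid range.
    warped_bins = np.clip(alpha * bins, 0, n - 1)
    return np.interp(warped_bins, bins, spectrum)

def select_alpha(spectrum, score_fn, alphas=np.arange(0.88, 1.13, 0.02)):
    """Grid-search the warp factor that maximizes score_fn, a stand-in
    for the likelihood of the warped utterance under the HMM."""
    scored = [(score_fn(warp_spectrum(spectrum, a)), a) for a in alphas]
    return max(scored)[1]
```

In a real system the warp would be applied inside the filterbank analysis and the score would be the HMM likelihood of the recognized transcription; the grid of factors around 1.0 reflects the roughly 10-12% range of vocal tract length variation typically assumed in warping-based normalization.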