The performance of speech recognizers drops substantially when there is a mismatch between training and testing conditions. Approaches based on a channel model generally assume that the training data are noise-free and the test data are noisy. In practice, this assumption is seldom correct. In this paper, we propose an iterative algorithm to compensate for noise in both the training and test data. The adopted approach compensates the speech model parameters using the noise present in the test data, and compensates the test data frames using the noise present in the training data. No assumptions are made about the types of noise present in the training and test data: the two are assumed not to have been recorded under the same conditions, and are likely to come from different and unknown microphones and acoustic environments. The effectiveness of this compensation scheme has been assessed on the MASK task using a continuous-density HMM-based speech recognizer. In this work we outline the compensation technique for treating noise in both the training and test data, then provide experimental results using this method, as well as using MLLR adaptation to compensate for the residual mismatch.
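To make the two-way compensation idea concrete, the following is a minimal sketch, not the paper's actual algorithm: it assumes features in a linear power-spectral domain where noise is additive, and that each signal begins with silent frames from which its noise can be estimated. All function and variable names here are illustrative.

```python
import numpy as np

def estimate_noise(frames, n_silent=10):
    """Estimate additive noise as the mean of the leading (assumed silent) frames."""
    return frames[:n_silent].mean(axis=0)

def compensate(model_means, test_frames, train_noise, test_noise):
    """One compensation pass: shift the model means by the test noise and
    the test frames by the training noise, so both sides reflect the same
    combined noise conditions."""
    comp_means = model_means + test_noise    # model side: add test noise
    comp_frames = test_frames + train_noise  # data side: add training noise
    return comp_means, comp_frames
```

In the iterative scheme described above, the noise estimates and compensated quantities would be refined over several passes; this sketch shows only a single pass of the basic shift.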