In previous work, we showed how to constrain the estimation of continuous mixture-density hidden Markov models (HMMs) when the amount of adaptation data is small. We used maximum-likelihood (ML) transformation-based approaches and Bayesian techniques to achieve near-native performance when testing nonnative speakers of the recognizer language. In this paper, we study various ML-based techniques and compare experimental results on data sets with recordings from nonnative and native speakers of American English. We divide the transformation-based techniques into two groups. In feature-space techniques, we hypothesize an underlying transformation in the feature space that results in a transformation of the HMM parameters. In model-space techniques, we hypothesize a direct transformation of the HMM parameters. In the experimental section, we show how the combination of the best ML and Bayesian adaptation techniques results in significant improvements in recognition accuracy. All experiments were carried out with SRI's DECIPHER(TM) speech recognition system.
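The feature-space versus model-space distinction above can be sketched numerically. The following is an illustrative example, not the paper's actual implementation: for a single Gaussian mixture component, a feature-space affine transform of the observations induces a transformation of the component mean and covariance, while a model-space (MLLR-style) transform rewrites the mean directly with an extended matrix. The names `A`, `b`, `W`, and the dimensionality are hypothetical.

```python
import numpy as np

# Illustrative sketch of the two transformation families for one Gaussian
# mixture component with mean mu and covariance sigma (assumed values).
rng = np.random.default_rng(0)
d = 3
mu = rng.normal(size=d)        # original HMM component mean
sigma = np.eye(d) * 0.5        # original component covariance

A = rng.normal(size=(d, d))    # hypothetical affine transform matrix
b = rng.normal(size=d)         # hypothetical offset

# Feature-space view: observations are assumed transformed, x' = A x + b,
# which induces a transformation of the model parameters:
mu_feat = A @ mu + b
sigma_feat = A @ sigma @ A.T

# Model-space view (MLLR-style): transform the mean directly with an
# extended matrix W = [b, A] applied to the extended mean [1; mu]:
W = np.hstack([b[:, None], A])
mu_model = W @ np.concatenate([[1.0], mu])

# With the same A and b, the two views give the same mean; the
# feature-space view additionally constrains how the covariance moves.
print(np.allclose(mu_feat, mu_model))  # → True
```

The practical difference is that a feature-space transform ties the mean and covariance updates together through the same matrix, whereas a model-space transform can update the means independently of the covariances.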