This paper surveys the main methods that have recently been investigated for making speech recognition systems more flexible at both the acoustic and linguistic processing levels. Improved flexibility enables such systems to perform well under a wide range of unexpected and adverse conditions by helping them cope with mismatches between training and testing speech utterances. The paper focuses on the Bayesian adaptive learning approach, the minimum classification error (MCE) approach, the HMM composition technique, and spontaneous speech recognition techniques.
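As a concrete illustration of the Bayesian adaptive learning approach mentioned above, the sketch below shows maximum a posteriori (MAP) re-estimation of a single Gaussian mean, a standard form of Bayesian adaptation for HMM parameters: the adapted mean interpolates between the prior (speaker-independent) mean and the statistics of the new speaker's data. The function name, the prior-weight parameter `tau`, and the toy data are illustrative assumptions, not details taken from this paper.

```python
import numpy as np

def map_adapt_mean(prior_mean, tau, frames, posteriors):
    """MAP re-estimate of a Gaussian mean.

    prior_mean : (D,) speaker-independent mean (the prior).
    tau        : scalar prior weight; larger tau trusts the prior more.
    frames     : (T, D) adaptation feature vectors.
    posteriors : (T,) occupation probabilities of this Gaussian per frame.

    Returns the adapted mean: a count-weighted interpolation between
    the prior mean and the posterior-weighted sample mean of the data.
    """
    occ = posteriors.sum()                          # total occupation count
    weighted_sum = (posteriors[:, None] * frames).sum(axis=0)
    return (tau * prior_mean + weighted_sum) / (tau + occ)
```

With little adaptation data (small occupation count) the estimate stays close to the prior mean; as data accumulates it converges to the maximum-likelihood sample mean, which is the behavior that makes Bayesian adaptation robust to sparse enrollment data.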