This paper describes the 2006 lecture recognition system developed at the Interactive Systems Laboratories (ISL) for the individual headset microphone (IHM), single distant microphone (SDM), and multiple distant microphone (MDM) conditions. The system was evaluated in the RT-06S Rich Transcription meeting evaluation sponsored by the US National Institute of Standards and Technology (NIST). We describe the principal differences between our current system and those submitted in previous years, namely improved acoustic and language models, cross-adaptation between systems with different front-ends and phoneme sets, and the use of various automatic speech segmentation algorithms. Our system achieved word error rates of 38.5% (53.4%) and 22.9% (32.2%) on the MDM and IHM conditions, respectively, of the RT-05S (RT-06S) lecture evaluation sets.