In this paper we address the problem of using background information to improve language models estimated from insufficient amounts of training material, without obscuring the characteristics of that original material. We introduce the method of language model fill-up, which appears better suited to this purpose than classical linear interpolation. The specific task on which we test the method is the adaptation of a language model to a speaker, which is particularly important for speaker-dependent dictation systems.
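As an illustrative sketch of the contrast (schematic notation only, not the precise formulation: $P_B$ denotes the background model, $P_S$ the model estimated from the speaker-specific material, and $\lambda$, $\beta$ are free weights), linear interpolation mixes both models for every $n$-gram, whereas fill-up keeps the speaker-specific estimate wherever the adaptation data provide one and otherwise falls back on a weighted background estimate:
\begin{align}
  P_{\text{interp}}(w \mid h)  &= \lambda\, P_S(w \mid h) + (1-\lambda)\, P_B(w \mid h), \\
  P_{\text{fill-up}}(w \mid h) &\approx
  \begin{cases}
    P_S(w \mid h)          & \text{if $(h, w)$ occurs in the speaker-specific data}, \\
    \beta\, P_B(w \mid h)  & \text{otherwise};
  \end{cases}
\end{align}
normalization details are omitted in this sketch.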