This paper introduces a novel language model (LM) adaptation method based on a mixture of latent word language models (LWLMs). LMs are often constructed as a mixture of n-gram models whose mixture weights are optimized on target-domain data. However, n-gram mixture modeling is not flexible enough for domain adaptation because the component models are merged in the observed word space. Since the words covered by out-of-domain LMs often differ from those in the target-domain LM, out-of-domain LMs struggle to provide adequate adaptation performance. Our solution is to carry out model merging in a latent variable space constructed from LWLMs. The latent variables in the LWLMs are represented as specific words selected from the observed word space, so the LWLMs can share a common latent variable space, which lets us realize mixture modeling that takes the latent variable space into account. Accordingly, this paper also describes a method for estimating the mixture weights of the LWLM mixture model; we use a sampling technique based on the Bayesian criterion in place of the conventional expectation-maximization algorithm. Our experiments show that LWLM mixture modeling is more effective than n-gram mixture modeling.
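As background, the conventional mixture-of-LMs baseline referred to above can be written in the standard linear-interpolation form (the notation here is illustrative, not taken from this paper):

P(w_t \mid h_t) = \sum_{k=1}^{K} \lambda_k \, P_k(w_t \mid h_t), \qquad \sum_{k=1}^{K} \lambda_k = 1, \quad \lambda_k \ge 0,

where P_k is the k-th component n-gram model, h_t is the word history, and the weights \lambda_k are typically tuned on target-domain data (conventionally by the expectation-maximization algorithm). The proposed method applies the same interpolation idea in the shared latent variable space of the LWLMs instead of the observed word space, with the weights estimated by Bayesian sampling rather than EM.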