This paper investigates combining multiple language model (LM) technologies simultaneously: not only modeling techniques but also training-data expansion based on external language resources and unsupervised adaptation for spontaneous speech recognition. Although combinations of LM technologies have been examined before, previous work focused only on modeling techniques and neglected two functionalities that are important in practical spontaneous language modeling: the use of external language resources and unsupervised LM adaptation. Our examination therefore employs not only manual transcriptions of target-domain speech but also out-of-domain text resources for spontaneous language modeling. In addition, we aggressively introduce unsupervised LM adaptation based on multi-pass decoding into the combination. Experimental results on a Japanese spontaneous speech recognition task show a significant word error rate reduction when the technologies are combined, compared with using each technology individually. Furthermore, we reveal relationships between the technologies.
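The abstract does not describe how the multi-pass unsupervised adaptation operates, so the following is only an illustrative sketch of one common scheme: first-pass hypotheses are treated as unsupervised adaptation data, an adaptation LM is estimated from them, and it is linearly interpolated with the base LM before the second decoding pass. All function names here are hypothetical, and real systems would use higher-order n-gram or neural LMs with lattice rescoring rather than the unigram models shown.

```python
from collections import Counter

def build_unigram_lm(text):
    """Estimate a unigram LM from text by maximum likelihood (hypothetical helper)."""
    counts = Counter(text.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def interpolate(base_lm, adapt_lm, lam):
    """Linear interpolation of two unigram LMs: P(w) = lam*P_base(w) + (1-lam)*P_adapt(w)."""
    vocab = set(base_lm) | set(adapt_lm)
    return {w: lam * base_lm.get(w, 0.0) + (1 - lam) * adapt_lm.get(w, 0.0)
            for w in vocab}

# First pass: decode with the base LM; the (possibly errorful) hypotheses
# then serve as unsupervised adaptation data for the second pass.
base_lm = build_unigram_lm("the meeting will start soon the agenda is long")
first_pass_hyps = "the meeting uh starts with the agenda uh today"
adapt_lm = build_unigram_lm(first_pass_hyps)

# Second pass: re-decode with the adapted (interpolated) LM, which now
# assigns probability to domain-specific events seen only in the hypotheses.
adapted_lm = interpolate(base_lm, adapt_lm, lam=0.5)
```

Because the interpolation weights sum to one, the adapted model remains a proper distribution while shifting probability mass toward words observed in the first-pass output (e.g. the filler "uh" above, absent from the base LM).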