Recent studies have shown that recurrent neural network language models (RNNLMs) outperform traditional language models such as smoothed n-grams. For traditional models it is known that adding information such as part-of-speech tags or topic information can improve performance. In this paper we investigate the usefulness of additional features for RNNLMs. We consider four types of features: POS tags, lemmas, and the topic and socio-situational setting of a conversation. In our experiments, almost all RNNLM models that make use of extra information outperform the baseline RNNLM in terms of both perplexity and word prediction accuracy. Whereas the baseline model has a perplexity of 114.79, the model that combines POS tags, socio-situational settings, and lemmas achieves the lowest perplexity of 83.59, and the combination of all four feature types, using a network with 500 hidden neurons, achieves the highest word prediction accuracy of 23.11%.
Index Terms: socio-situational setting, part of speech, lemma, topic, recurrent neural networks.