Language models for speech recognition tend to be brittle across domains: their performance degrades when the genre or topic of the test material differs from that of the training data. A number of adaptation methods, exploiting either lexical co-occurrence or topic cues, have been developed to mitigate this problem with varying degrees of success. Among them, a more recent thread of work is the relevance modeling approach, which has shown promise in capturing the lexical co-occurrence relationship between the search history and an upcoming word. However, a potential downside of such an approach is the need to resort to a retrieval procedure to obtain relevance information, which is usually complex and time-consuming for practical applications. In this paper, we propose a word relevance modeling framework, which introduces a novel use of relevance information for dynamic language model adaptation in speech recognition. It not only inherits the merits of several existing techniques but also provides a flexible and systematic way to render the lexical and topical relationships between the search history and the upcoming word. Experiments on large vocabulary continuous speech recognition demonstrate the performance merits of the methods instantiated from this framework when compared to several existing methods.
Index Terms: language model, relevance, lexical co-occurrence, topic cues, adaptation