We introduce a technique for dynamically applying contextually derived language models to a state-of-the-art speech recognition system. These typically small-footprint models can be seen as a generalization of cache-based models [1], whereby contextually salient n-grams are derived from relevant sources (not just user-generated language) to produce a model intended for combination with the baseline language model. The derived models are applied during first-pass decoding as a form of on-the-fly composition between the decoder search graph and the set of weighted contextual n-grams. We present a construction algorithm that takes a trie representing the contextual n-grams and produces a weighted finite-state automaton that is more compact than a standard n-gram machine. Finally, we present a set of empirical results on the recognition of spoken search queries, where a contextual model encoding recent trending queries is applied using the proposed technique.
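
The core idea, a trie of weighted contextual n-grams consulted during decoding to bias hypothesis scores toward the baseline-plus-contextual combination, can be illustrated with a minimal sketch. This is not the paper's construction algorithm; the class name, the suffix-matching lookup, and the example weights are illustrative assumptions.

```python
# Illustrative sketch (not the paper's implementation): contextually
# salient n-grams stored in a trie, each carrying a bias weight that
# would be combined with the baseline language-model score during
# first-pass decoding.
class ContextualTrie:
    """Trie over word sequences; a node's weight is set when a full
    contextual n-gram ends at that node."""

    def __init__(self):
        self.children = {}
        self.weight = None  # bias (e.g. a log-prob boost) for a complete n-gram

    def add(self, ngram, weight):
        """Insert a contextual n-gram (list of words) with its bias weight."""
        node = self
        for word in ngram:
            node = node.children.setdefault(word, ContextualTrie())
        node.weight = weight

    def score(self, history, word):
        """Return the bias for `word` given the longest matching suffix of
        `history`, or 0.0 when no contextual n-gram applies."""
        context = list(history) + [word]
        # Try progressively shorter suffixes, longest match first.
        for start in range(len(context)):
            node = self
            matched = True
            for w in context[start:]:
                if w in node.children:
                    node = node.children[w]
                else:
                    matched = False
                    break
            if matched and node.weight is not None:
                return node.weight
        return 0.0
```

A decoder would query `score` at each word expansion and add the returned bias to the baseline model's score, leaving non-contextual hypotheses unaffected; the paper's contribution is compiling such a trie into a compact weighted finite-state automaton for composition with the search graph.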