The construction of a speech understanding application requires a method for extracting language models of appropriate size and perplexity from the application grammar. We describe a method for approximating context-free grammars by finite-state models with a range of sizes and perplexities and present experimental results of its application to a variety of grammars, including a fairly large grammar for a spoken-language translation application.
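To make the general idea concrete, the sketch below illustrates one common (and here purely hypothetical) way of obtaining a finite-state approximation of a context-free grammar: bounding the recursion depth during expansion so that the derivable string set becomes finite. This is not the method described in the paper; the grammar, the `expand` function, and the depth parameter are illustrative assumptions only, meant to show how a size/coverage trade-off can arise from a single bound.

```python
"""Toy illustration of depth-bounded finite-state approximation of a CFG.
A generic illustrative scheme, not the paper's method."""

from itertools import product

# A small CFG: uppercase symbols are nonterminals, lowercase are terminals.
# The PP -> prep NP -> ... -> PP cycle makes the language infinite.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["det", "noun"], ["det", "noun", "PP"]],
    "VP": [["verb", "NP"], ["verb", "NP", "PP"]],
    "PP": [["prep", "NP"]],
}

def expand(symbol, depth):
    """Return the set of terminal strings derivable from `symbol`
    using at most `depth` levels of nonterminal expansion."""
    if symbol not in GRAMMAR:        # terminal: yields itself
        return {(symbol,)}
    if depth == 0:                   # recursion budget exhausted: prune
        return set()
    strings = set()
    for production in GRAMMAR[symbol]:
        # Expand each symbol of the production, then concatenate the choices.
        per_symbol = [expand(sym, depth - 1) for sym in production]
        if all(per_symbol):
            for combo in product(*per_symbol):
                strings.add(tuple(t for part in combo for t in part))
    return strings

if __name__ == "__main__":
    for d in (3, 4, 5):
        approx = expand("S", d)
        print(f"depth {d}: {len(approx)} sentences in the finite approximation")
```

Raising the depth bound enlarges the approximating model and brings its coverage (and hence its perplexity on in-grammar text) closer to that of the original grammar, which is the kind of size/perplexity range the abstract refers to.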