Touchscreen keyboards rely on language modeling to auto-correct noisy typing and to offer word predictions. While language models can be pre-trained on huge amounts of text, they may fail to capture a user's unique writing style. Using a recently released email personalization dataset, we show that adapting to a user's text with language models based on prediction by partial match (PPM) and recurrent neural networks outperforms a unigram cache. On simulated noisy touchscreen typing of 44 users, our best model increased keystroke savings by 9.9% relative and reduced word error rate by 36% relative compared to a static background language model.
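To make the baseline concrete, the sketch below shows one common form of the unigram-cache adaptation the abstract compares against: a static background unigram model linearly interpolated with a per-user cache of recently typed words. This is a minimal illustration under assumed details (the class name, the interpolation weight, and the smoothing floor are hypothetical), not the paper's implementation.

```python
from collections import Counter

class CachedUnigramLM:
    """Sketch of a unigram-cache language model (illustrative, not the
    paper's code): interpolates a static background unigram model with
    a cache of words the user has typed."""

    def __init__(self, background_probs, cache_weight=0.1):
        # background_probs: dict mapping word -> probability (assumed given)
        self.background = background_probs
        self.cache = Counter()      # per-user word counts
        self.cache_total = 0
        self.lam = cache_weight     # interpolation weight for the cache

    def observe(self, word):
        # Adapt to the user: record each word they type.
        self.cache[word] += 1
        self.cache_total += 1

    def prob(self, word):
        # Small floor for words unseen in the background model.
        p_bg = self.background.get(word, 1e-9)
        if self.cache_total == 0:
            return p_bg
        p_cache = self.cache[word] / self.cache_total
        # Linear interpolation of background and cache estimates.
        return (1 - self.lam) * p_bg + self.lam * p_cache
```

The PPM and recurrent-neural-network adaptation methods studied in the paper go beyond this by conditioning on preceding context rather than treating words independently.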