This article describes requirements and a prototype system for flexible multimodal human-machine interaction in two substantially different mobile environments, namely pedestrian and car. The system supports integrated trip planning using multimodal input and output. Motivated by the specific safety and privacy requirements of both environments, we present a framework for flexible modality control. A characteristic feature of our framework is the insight that both the user and the system may independently and asynchronously initiate a modality transition. We conclude with a brief discussion of open issues and further research questions.