Recent advances in Large Language Models (LLMs) have been extended to the field of Automatic Speech Recognition (ASR), leveraging the strong generative capabilities of LLMs for transcription generation. However, privacy concerns have emerged, since both LLMs and large ASR models may unintentionally memorize training examples. Once a model is deployed, practitioners may be asked to remove specific personal data from it, and re-training the model from scratch for every such request is computationally expensive. In this work, we provide the first investigation into measuring and detecting unintended memorization in LLM-based ASR models and present an efficient method for unlearning memorized data upon request. Experiments on the LibriSpeech dataset show that the proposed method is effective and achieves a strong privacy-utility trade-off.