Recent automatic speech recognition models such as Wav2vec 2.0 and Whisper face deployment challenges due to their large numbers of parameters. Model compression through joint distillation and structured pruning has emerged as an effective solution, but it remains susceptible to overfitting and catastrophic forgetting, both exacerbated by domain shift and limited data availability. To address this issue, we propose a gradient-guided parameter regularization method that preserves the model's generality. Our approach uses gradient values to detect overfit-prone parameters in the student model and regularizes these parameters to stay close to their counterparts in the teacher model. Through extensive experiments, we demonstrate that our approach reduces overfitting and improves performance, particularly under domain shift and limited data availability.
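
To make the mechanism concrete, the following is a minimal PyTorch sketch of one plausible implementation, not the paper's exact formulation: the function name gradient_guided_reg, the top-fraction selection rule, and the coefficient lam are illustrative assumptions, and student and teacher parameters are assumed to be shape-aligned.

    import torch

    def gradient_guided_reg(student, teacher, top_frac=0.1, lam=1e-3):
        # Sketch (assumed): treat the parameters with the largest gradient
        # magnitudes as overfit-prone and pull them toward the teacher.
        penalty = 0.0
        for p_s, p_t in zip(student.parameters(), teacher.parameters()):
            if p_s.grad is None:
                continue
            g = p_s.grad.detach().abs()
            # Select the top `top_frac` fraction of entries by gradient magnitude.
            k = max(1, int(top_frac * g.numel()))
            thresh = torch.topk(g.flatten(), k).values.min()
            mask = (g >= thresh).float()
            # L2 penalty toward the teacher's counterpart weights, applied
            # only to the masked (overfit-prone) entries.
            penalty = penalty + lam * (mask * (p_s - p_t.detach()).pow(2)).sum()
        return penalty

    # Usage sketch: backpropagate the task loss first so that p.grad is
    # populated, then backpropagate the regularization term and update.
    loss = task_loss(student, teacher, batch)  # hypothetical loss function
    loss.backward()
    gradient_guided_reg(student, teacher).backward()
    optimizer.step()
    optimizer.zero_grad()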