Speech foundation models have shown significant success in various speech-processing applications. However, fine-tuning these models on dysarthric speech is challenging, as the limited dataset sizes lead to overfitting. This work proposes a modified multitask learning framework to mitigate overfitting when fine-tuning foundation models. Specifically, we train the model on a more complex task alongside the task of interest and use gradient projection to preserve beneficial updates while resolving conflicts between the two tasks. We demonstrate that using automatic speech recognition as the main task and dysarthria detection as the auxiliary task improves model robustness and dysarthria detection performance. The proposed method reduces overfitting and improves in-corpus and cross-corpus detection accuracy by 5.4% to 13.4% compared to standard multitask learning. These findings highlight the importance of structured multitask training for enhancing foundation model adaptability.
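The gradient-projection step mentioned above can be sketched as follows. This is a minimal illustration in the style of PCGrad-like conflict resolution, assuming the projection rule is the standard one (drop the component of the auxiliary gradient that opposes the main-task gradient); the paper's exact formulation may differ, and the function and variable names here are hypothetical.

```python
import numpy as np

def project_conflicting(g_main, g_aux):
    """Combine task gradients, resolving conflicts by projection.

    Assumption (PCGrad-style rule): if the auxiliary-task gradient
    conflicts with the main-task gradient (negative dot product),
    remove its conflicting component by projecting it onto the
    normal plane of the main-task gradient; otherwise keep it.
    """
    dot = np.dot(g_aux, g_main)
    if dot < 0:
        # Subtract the component of g_aux that opposes g_main.
        g_aux = g_aux - (dot / np.dot(g_main, g_main)) * g_main
    # The combined update never moves against the main-task direction.
    return g_main + g_aux

# Conflicting case: the auxiliary gradient's opposing component is removed.
g_main = np.array([1.0, 0.0])
g_aux = np.array([-1.0, 1.0])
update = project_conflicting(g_main, g_aux)  # → array([1., 1.])
```

In the conflicting example above, the `[-1, 0]` component of the auxiliary gradient is discarded, so the beneficial orthogonal component `[0, 1]` is preserved while the main-task update remains intact.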