Multilingual Automatic Speech Recognition (ASR) presents several challenges, especially when multiple languages are spoken within the same audio. Traditional multilingual ASR systems often rely on language-specific models trained on low-resource Indic language data, which limits their scalability and efficiency. Building individual models is difficult because Indic language data is scarce, and the dependence on an accurate upstream language identification (LID) model further degrades the downstream transcription task. Our method integrates LID and multilingual ASR in a unified framework, leveraging their symbiotic relationship to overcome these limitations. This study presents an approach to multilingual ASR that incorporates LID capabilities, using Whisper as the baseline architecture. Experimental results on benchmark datasets demonstrate the effectiveness of our method, which achieves an absolute 19.1% improvement in Word Error Rate (WER) while enhancing LID performance by 6% in terms of Diarization Error Rate (DER).
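To make the baseline concrete, the following is a minimal sketch of how Whisper already couples LID and transcription in a single model, using the public openai-whisper package; the audio path and model size are illustrative assumptions, and the unified framework described above builds on (and fine-tunes beyond) this baseline behavior rather than being shown here.

```python
import whisper

# Load a pretrained multilingual Whisper checkpoint
# ("base" is an illustrative choice, not the paper's configuration).
model = whisper.load_model("base")

# Load a hypothetical audio file and pad/trim it to Whisper's
# 30-second input window.
audio = whisper.load_audio("audio.wav")
audio = whisper.pad_or_trim(audio)

# Compute the log-Mel spectrogram on the model's device.
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# LID step: the same encoder output yields a distribution
# over language tokens.
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")

# ASR step: decode the transcription conditioned on the
# detected language token.
options = whisper.DecodingOptions()
result = whisper.decode(model, mel, options)
print(result.text)
```

Because the language token is part of the decoder's prompt, LID errors propagate directly into the transcription, which is the coupling the unified framework exploits.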