Recently, automatic speech recognition (ASR) models have been growing in size, much like large language models (LLMs), and efficiently tuning them for downstream tasks under limited resources remains a challenge. In this paper, we propose a simple and effective downstream task tuning method based on task vectors. A task vector specifies a direction in the weight space of the pre-trained Whisper model, and moving the weights along that direction achieves downstream task adaptation. We demonstrate that the model can be adjusted through arithmetic operations on task vectors, and that these adjustments are reflected in Whisper's behavior. Furthermore, we can efficiently construct a generalized model by summing task vectors. To evaluate effectiveness, we define one task vector per language in a multilingual setting, i.e., the direction in model weight space obtained by fine-tuning on that language. Using the Common Voice multilingual dataset, we confirm that task vectors serve as a simple and effective approach to downstream task tuning in ASR.
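As a minimal sketch of the task-vector arithmetic described above: a task vector is commonly formed as the element-wise difference between fine-tuned and pre-trained weights, and a generalized model is obtained by adding a scaled sum of task vectors back to the pre-trained weights. The function names, the scalar `scale`, and the toy two-parameter "models" below are illustrative assumptions, not the paper's implementation; plain dicts of floats stand in for real model state dicts.

```python
def task_vector(pretrained, finetuned):
    # tau = theta_finetuned - theta_pretrained, per parameter
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def apply_task_vectors(pretrained, vectors, scale=1.0):
    # theta_new = theta_pretrained + scale * sum_i tau_i
    merged = dict(pretrained)
    for tau in vectors:
        for k, v in tau.items():
            merged[k] += scale * v
    return merged

# Toy example: two "languages" fine-tuned from the same base model.
base = {"w": 1.0, "b": 0.0}
ft_lang_a = {"w": 1.5, "b": 0.2}
ft_lang_b = {"w": 0.8, "b": -0.1}

tau_a = task_vector(base, ft_lang_a)
tau_b = task_vector(base, ft_lang_b)

# A generalized model built from the sum of per-language task vectors.
merged = apply_task_vectors(base, [tau_a, tau_b], scale=0.5)
```

In practice the same operations would be applied to every tensor in a Whisper checkpoint's state dict, with the scaling coefficient chosen on held-out data.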