In this paper, we propose a parameter-efficient fine-tuning method to tailor a pre-trained model for speaker verification. The proposed method unifies adapter tuning and prompt tuning in a single framework. Instead of conventional static prompts, we first insert a prompt generator between neighboring transformer layers of the pre-trained model, which incorporates utterance-specific cues to dynamically generate instance-aware prompts. Meanwhile, we append parallel adapter branches to the multi-head attention and feed-forward modules of the transformer layers to capture speaker-related information. Experimental results on the VoxCeleb datasets demonstrate the superiority of our method while updating fewer than 10% of the parameters.
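The two components described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the dimensions, weight initializations, mean-pooling for the utterance summary, and single-linear-layer prompt generator are all illustrative assumptions; the paper does not specify these details.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, r, n_prompts = 16, 10, 4, 2  # hidden dim, frames, bottleneck, prompt count (illustrative)

# Parallel adapter branch: a small bottleneck network whose output is
# added to the output of a frozen module (MHA or feed-forward).
W_down = rng.standard_normal((d, r)) * 0.1
W_up = rng.standard_normal((r, d)) * 0.1

def parallel_adapter(x, frozen_out):
    # x: (T, d) input to the frozen sub-layer; frozen_out: its output
    return frozen_out + np.maximum(x @ W_down, 0.0) @ W_up

# Instance-aware prompt generator: summarize the utterance, then map the
# summary to a set of prompt tokens prepended before the next layer.
W_gen = rng.standard_normal((d, n_prompts * d)) * 0.1

def generate_prompts(x):
    pooled = x.mean(axis=0)                     # utterance-level summary, shape (d,)
    return (pooled @ W_gen).reshape(n_prompts, d)

x = rng.standard_normal((T, d))
frozen_out = x  # stand-in for a frozen transformer sub-layer output
h = parallel_adapter(x, frozen_out)
prompts = generate_prompts(x)
seq = np.concatenate([prompts, h], axis=0)      # prompts prepended to the frame sequence
print(seq.shape)  # (T + n_prompts, d) = (12, 16)
```

Only `W_down`, `W_up`, and `W_gen` would be trained in such a scheme; the backbone stays frozen, which is what keeps the updated parameter count small.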