Despite the remarkable success of convolutional neural networks (CNNs) in voiceprint recognition, we still lack a comprehensive understanding of which features these models actually extract. To address this gap, this paper adopts an attribution approach to explain a voiceprint identification model and visualize the relevant features. Using five attribution methods, we identify the features extracted by the ECAPA-TDNN model and verify the reliability of our attribution techniques. We also explore two distinct methods for visualizing voiceprint features: one aimed at interpreting features in unknown speech, the other focused on known speech. Through attribution, we can capture voiceprint features within speech data more precisely without significantly degrading the performance of the voiceprint recognition model, laying the groundwork for a more detailed study of voiceprint features in the future.
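
As a concrete illustration of the attribution setup described above, the minimal sketch below computes spectrogram-level attributions for a speaker-identification model with Captum's Integrated Gradients, one of the standard gradient-based attribution methods. The toy `SpeakerNet` classifier, the input file `speech.wav`, and the target speaker index are illustrative assumptions standing in for the paper's actual ECAPA-TDNN pipeline.

```python
# Minimal sketch: spectrogram-level attribution for a speaker-ID model via
# Integrated Gradients. SpeakerNet is a hypothetical stand-in for ECAPA-TDNN;
# the audio path and target speaker index are also illustrative assumptions.
import torch
import torchaudio
from captum.attr import IntegratedGradients

class SpeakerNet(torch.nn.Module):
    """Toy stand-in for an ECAPA-TDNN speaker classifier (hypothetical)."""
    def __init__(self, n_mels: int = 80, n_speakers: int = 10):
        super().__init__()
        self.conv = torch.nn.Conv1d(n_mels, 64, kernel_size=5, padding=2)
        self.head = torch.nn.Linear(64, n_speakers)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, n_mels, time) -> speaker logits (batch, n_speakers)
        h = torch.relu(self.conv(mel)).mean(dim=-1)  # temporal average pooling
        return self.head(h)

model = SpeakerNet().eval()

# Load a waveform and convert it to the log-mel features the model consumes.
waveform, sr = torchaudio.load("speech.wav")  # hypothetical input file
mel = torchaudio.transforms.MelSpectrogram(sample_rate=sr, n_mels=80)(waveform)
mel = torch.log(mel + 1e-6).requires_grad_(True)

# Attribute the target speaker's logit back to individual time-frequency bins;
# bins with large positive scores mark the regions the model relies on as
# voiceprint evidence, which can then be visualized as a heatmap over the mel
# spectrogram.
ig = IntegratedGradients(model)
attr = ig.attribute(mel, target=3)  # target speaker index is illustrative
print(attr.shape)                   # attributions share the mel input's shape
```

Because the attributions have the same shape as the input spectrogram, they can be overlaid directly on the mel spectrogram, which is the basis for the feature-visualization methods discussed in this paper.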