ISCA Archive Interspeech 2022

ACNN-VC: Utilizing Adaptive Convolution Neural Network for One-Shot Voice Conversion

Ji Sub Um, Yeunju Choi, Hoi Rin Kim

Voice conversion (VC) converts the speaker characteristics of a source speaker into those of a target speaker without modifying the linguistic content. To overcome the limitations of existing VC systems on target speakers unseen during training, many researchers have recently studied one-shot voice conversion. Although many studies have shown that voice conversion can be performed with only one utterance of an unseen target speaker, the converted speech still sounds far from the target speaker's voice. To enhance the similarity of the generated speech, we incorporate an adaptive convolution neural network (ACNN) into the voice conversion system in two ways. First, we combine ACNNs with a normalization method so that speaker-related information is adapted during the denormalization process. Second, we build an architecture of ACNNs with various receptive fields to generate a voice closer to the target speaker while capturing temporal patterns. We evaluate both methods with objective and subjective metrics. Results show that the converted speech achieves higher speaker similarity than previous methods while maintaining a comparable naturalness score.
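
The PyTorch sketch below is a rough illustration of the two ideas described in the abstract: an adaptive convolution whose kernel is predicted from a speaker embedding, used (1) to produce speaker-conditioned scale/shift terms in a denormalization step and (2) in parallel branches with different receptive fields. All module names, layer sizes, and kernel choices (AdaptiveConv1d, AdaptiveDenorm, MultiReceptiveFieldACNN, spk_dim=256, kernel sizes 3/5/7) are assumptions made for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveConv1d(nn.Module):
    """Adaptive 1-D convolution: a depthwise kernel is predicted from a speaker embedding.

    Hypothetical sketch; the layer sizes and kernel-generation scheme are assumptions.
    """
    def __init__(self, channels, kernel_size, spk_dim):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        # Predict one filter per channel from the (one-shot) speaker embedding.
        self.kernel_gen = nn.Linear(spk_dim, channels * kernel_size)

    def forward(self, x, spk_emb):
        # x: (B, C, T) content features, spk_emb: (B, spk_dim)
        B, C, T = x.shape
        kernel = self.kernel_gen(spk_emb).view(B * C, 1, self.kernel_size)
        # Per-utterance depthwise convolution via the grouped-conv trick.
        x = x.reshape(1, B * C, T)
        y = F.conv1d(x, kernel, padding=self.kernel_size // 2, groups=B * C)
        return y.view(B, C, T)


class AdaptiveDenorm(nn.Module):
    """Idea 1 (assumed form): normalize content features, then denormalize with
    speaker-conditioned scale and shift produced by adaptive convolutions."""
    def __init__(self, channels, kernel_size, spk_dim):
        super().__init__()
        self.norm = nn.InstanceNorm1d(channels, affine=False)
        self.to_gamma = AdaptiveConv1d(channels, kernel_size, spk_dim)
        self.to_beta = AdaptiveConv1d(channels, kernel_size, spk_dim)

    def forward(self, x, spk_emb):
        h = self.norm(x)                    # strip speaker statistics
        gamma = self.to_gamma(h, spk_emb)   # speaker-dependent scale
        beta = self.to_beta(h, spk_emb)     # speaker-dependent shift
        return h * (1 + gamma) + beta


class MultiReceptiveFieldACNN(nn.Module):
    """Idea 2 (assumed form): parallel adaptive convolutions with different
    kernel sizes, so both short- and long-range temporal patterns are covered."""
    def __init__(self, channels, kernel_sizes, spk_dim):
        super().__init__()
        self.branches = nn.ModuleList(
            AdaptiveConv1d(channels, k, spk_dim) for k in kernel_sizes
        )

    def forward(self, x, spk_emb):
        return sum(branch(x, spk_emb) for branch in self.branches)


if __name__ == "__main__":
    # Toy usage: 80-dim mel features of a source utterance, one target speaker embedding.
    denorm = AdaptiveDenorm(channels=80, kernel_size=5, spk_dim=256)
    mrf = MultiReceptiveFieldACNN(channels=80, kernel_sizes=(3, 5, 7), spk_dim=256)
    mel = torch.randn(2, 80, 128)   # (batch, mel bins, frames)
    spk = torch.randn(2, 256)       # one-shot target speaker embedding
    out = mrf(denorm(mel, spk), spk)
    print(out.shape)                # torch.Size([2, 80, 128])
```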