ISCA Archive Interspeech 2014

Sequence error (SE) minimization training of neural network for voice conversion

Feng-Long Xie, Yao Qian, Yuchen Fan, Frank K. Soong, Haifeng Li

Neural network (NN) based voice conversion, which employs a nonlinear function to map features from a source to a target speaker, has been shown to outperform the GMM-based voice conversion approach [4–7]. However, limitations remain in NN-based voice conversion: e.g., the NN is trained under a Frame Error (FE) minimization criterion, with the weights adjusted to minimize the squared errors over the whole source-target, stereo training data set. In this paper, we borrow the idea of sentence-level optimization from minimum generation error (MGE) training in HMM-based TTS synthesis and modify FE minimization to Sequence Error (SE) minimization in NN training for voice conversion. The conversion error over a training sentence from the source speaker to the target speaker is minimized via a gradient descent-based back-propagation (BP) procedure. Experimental results show that speech converted by an NN first trained with frame error minimization and then refined with sequence error minimization sounds subjectively better than speech converted by an NN trained with frame error minimization only. Scores on both naturalness and similarity to the target speaker are improved.
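The two-stage scheme in the abstract can be sketched with a toy linear "network": stage 1 minimizes independent per-frame squared error (FE), and stage 2 refines the weights with a sentence-level term that couples neighboring frames through delta features, standing in for the MGE-style trajectory error. All dimensions, the linear model, the learning rate, and the delta formulation here are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy aligned (stereo) source/target data: T frames, D-dim features.
# Sizes and the linear conversion model are illustrative assumptions.
T, D = 50, 4
X = rng.standard_normal((T, D))
W_true = rng.standard_normal((D, D))
Y = X @ W_true + 0.01 * rng.standard_normal((T, D))

def deltas(Z):
    """First-order dynamic (delta) features over one sentence."""
    return np.vstack([Z[1:] - Z[:-1], np.zeros((1, Z.shape[1]))])

def seq_loss(W):
    """Frame error plus a sentence-level delta-trajectory error."""
    E = X @ W - Y
    return np.mean(E ** 2) + np.mean((deltas(X @ W) - deltas(Y)) ** 2)

W = 0.1 * rng.standard_normal((D, D))
lr = 0.01

# Stage 1: frame-error (FE) minimization -- each frame's squared error
# is treated independently.
for _ in range(500):
    E = X @ W - Y
    W -= lr * 2.0 * X.T @ E / T

fe_loss = np.mean((X @ W - Y) ** 2)
se_before = seq_loss(W)

# Stage 2: sequence-error (SE) refinement -- the delta term couples
# neighboring frames, so the gradient back-propagates a difference
# operator over the whole sentence before reaching the weights.
for _ in range(500):
    Yhat = X @ W
    E = Yhat - Y
    dE = deltas(Yhat) - deltas(Y)
    g = np.zeros_like(Yhat)
    g[1:] += dE[:-1]   # transpose of the forward-difference operator
    g[:-1] -= dE[:-1]
    W -= lr * 2.0 * X.T @ (E + g) / T

se_after = seq_loss(W)
print(f"FE loss {fe_loss:.4f}, sequence loss {se_before:.4f} -> {se_after:.4f}")
```

Stage 2 starts from the FE-trained weights rather than from scratch, mirroring the pretrain-then-refine order reported in the abstract; since the sequence loss here is a convex quadratic in W and the step size is small, the refinement can only lower it.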


doi: 10.21437/Interspeech.2014-448

Cite as: Xie, F.-L., Qian, Y., Fan, Y., Soong, F.K., Li, H. (2014) Sequence error (SE) minimization training of neural network for voice conversion. Proc. Interspeech 2014, 2283-2287, doi: 10.21437/Interspeech.2014-448

@inproceedings{xie14b_interspeech,
  author={Feng-Long Xie and Yao Qian and Yuchen Fan and Frank K. Soong and Haifeng Li},
  title={{Sequence error (SE) minimization training of neural network for voice conversion}},
  year=2014,
  booktitle={Proc. Interspeech 2014},
  pages={2283--2287},
  doi={10.21437/Interspeech.2014-448},
  issn={2308-457X}
}