ISCA Archive Interspeech 2014

Voice conversion using generative trained deep neural networks with multiple frame spectral envelopes

Ling-Hui Chen, Zhen-Hua Ling, Li-Rong Dai

This paper presents a deep neural network (DNN) based spectral envelope conversion method. A global DNN is employed to model the complex non-linear mapping between the spectral envelopes of source and target speakers. The proposed DNN is generatively trained layer by layer as a cascade of two restricted Boltzmann machines (RBMs) and a bidirectional associative memory (BAM), all of which are generative models estimated using the contrastive divergence algorithm. Furthermore, multiple-frame spectral envelopes are adopted in place of dynamic features for better modeling by the DNN. Subjective experimental results validate the superiority of the proposed method.
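As a rough illustration of the layer-wise generative pretraining the abstract mentions, the sketch below trains a single Bernoulli RBM with one-step contrastive divergence (CD-1) and then uses its hidden activations as input for the next layer, DBN-style. This is a simplified toy, not the paper's actual setup: the dimensions, learning rate, and binary data are arbitrary assumptions, and the paper's cascade additionally involves a Gaussian-visible RBM and a BAM joint layer that are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b, c, v0, lr=0.05):
    """One CD-1 update for a Bernoulli RBM.

    v0: batch of visible vectors, shape (n, n_vis).
    W: weights (n_vis, n_hid); b: visible bias; c: hidden bias.
    """
    # Positive phase: hidden probabilities driven by the data.
    h0 = sigmoid(v0 @ W + c)
    h0_sample = (rng.random(h0.shape) < h0).astype(float)
    # Negative phase: one Gibbs step back to a reconstruction.
    v1 = sigmoid(h0_sample @ W.T + b)
    h1 = sigmoid(v1 @ W + c)
    # CD gradient approximation: <v h>_data - <v h>_recon.
    n = v0.shape[0]
    W += lr * (v0.T @ h0 - v1.T @ h1) / n
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (h0 - h1).mean(axis=0)
    return W, b, c

# Toy dimensions and data (assumptions, not from the paper).
n_vis, n_hid = 8, 4
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b = np.zeros(n_vis)
c = np.zeros(n_hid)
data = (rng.random((64, n_vis)) < 0.5).astype(float)

# Pretrain this layer, then stack: the hidden activations become
# the "data" for the next RBM in the cascade.
for _ in range(20):
    W, b, c = cd1_step(W, b, c, data)
next_layer_input = sigmoid(data @ W + c)
```

After all layers are pretrained this way, the stacked weights would initialize the conversion DNN, which is the general idea behind the generative training described above.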


doi: 10.21437/Interspeech.2014-188

Cite as: Chen, L.-H., Ling, Z.-H., Dai, L.-R. (2014) Voice conversion using generative trained deep neural networks with multiple frame spectral envelopes. Proc. Interspeech 2014, 2313-2317, doi: 10.21437/Interspeech.2014-188

@inproceedings{chen14c_interspeech,
  author={Ling-Hui Chen and Zhen-Hua Ling and Li-Rong Dai},
  title={{Voice conversion using generative trained deep neural networks with multiple frame spectral envelopes}},
  year=2014,
  booktitle={Proc. Interspeech 2014},
  pages={2313--2317},
  doi={10.21437/Interspeech.2014-188},
  issn={2308-457X}
}