ISCA Archive Interspeech 2014

Feature space maximum a posteriori linear regression for adaptation of deep neural networks

Zhen Huang, Jinyu Li, Sabato Marco Siniscalchi, I-Fan Chen, Chao Weng, Chin-Hui Lee

We propose a feature space maximum a posteriori (MAP) linear regression framework to adapt the parameters of context-dependent deep neural network hidden Markov models (CD-DNN-HMMs). Because of the huge number of parameters in DNN acoustic models for large vocabulary continuous speech recognition, over-fitting can be severe in DNN adaptation and often impairs the robustness of the adapted model. The linear input network (LIN), a straightforward feature space adaptation method for DNNs analogous to feature space maximum likelihood linear regression (fMLLR), can suffer from the same robustness problem. The proposed framework is based on MAP estimation of the LIN parameters, incorporating prior knowledge into the adaptation process. Experimental results on the Switchboard task show that, compared with the speaker-independent CD-DNN-HMM system, LIN provides a 4.28% relative word error rate reduction (WERR), and the proposed fMAPLIN method provides a further 1.15% WERR (5.43% in total) on top of LIN.
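The core idea of MAP estimation of a feature-space transform can be illustrated with a closed-form toy analogue: fit a linear transform W to adaptation data while regularizing it toward a prior mean W0 (the identity, which is how a LIN is initialized). This is only an illustrative sketch, not the paper's actual method — the paper estimates the LIN layer by gradient-based training of the full DNN, and all names and the toy data below are assumptions for the example.

```python
import numpy as np

def map_linear_transform(X, Y, W0, lam):
    """MAP-style estimate of a feature-space linear transform W.

    Minimizes  sum_t ||y_t - W x_t||^2 + lam * ||W - W0||_F^2,
    i.e. a least-squares fit of W regularized toward a prior mean W0.
    Closed form: W = (Y X^T + lam W0) (X X^T + lam I)^{-1}.
    """
    d = X.shape[0]
    return (Y @ X.T + lam * W0) @ np.linalg.inv(X @ X.T + lam * np.eye(d))

rng = np.random.default_rng(0)
d, T = 5, 200
X = rng.standard_normal((d, T))                      # adaptation features
W_true = np.eye(d) + 0.1 * rng.standard_normal((d, d))
Y = W_true @ X                                       # "adapted" features
W0 = np.eye(d)                                       # prior mean: identity

W_ml = map_linear_transform(X, Y, W0, lam=0.0)       # pure ML (plain LIN)
W_map = map_linear_transform(X, Y, W0, lam=1e4)      # strong prior

# With lam = 0 the estimate is unregularized; with a large lam the
# solution shrinks toward the identity, guarding against over-fitting
# when the adaptation data is scarce.
```

With little adaptation data the unregularized estimate can over-fit, while the MAP estimate interpolates between the data-driven solution and the prior mean, which is the robustness argument the abstract makes for fMAPLIN over plain LIN.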


doi: 10.21437/Interspeech.2014-500

Cite as: Huang, Z., Li, J., Siniscalchi, S.M., Chen, I.-F., Weng, C., Lee, C.-H. (2014) Feature space maximum a posteriori linear regression for adaptation of deep neural networks. Proc. Interspeech 2014, 2992-2996, doi: 10.21437/Interspeech.2014-500

@inproceedings{huang14f_interspeech,
  author={Zhen Huang and Jinyu Li and Sabato Marco Siniscalchi and I-Fan Chen and Chao Weng and Chin-Hui Lee},
  title={{Feature space maximum a posteriori linear regression for adaptation of deep neural networks}},
  year=2014,
  booktitle={Proc. Interspeech 2014},
  pages={2992--2996},
  doi={10.21437/Interspeech.2014-500},
  issn={2308-457X}
}