ISCA Archive Interspeech 2006

Forward-backward training of hybrid HMM/BN acoustic models

Konstantin Markov, Satoshi Nakamura

In this paper, we describe an application of the forward-backward (FB) algorithm to maximum likelihood training of hybrid HMM/Bayesian Network (BN) acoustic models. Previously, HMM/BN parameter estimation was based on a Viterbi training algorithm requiring two passes over the training data: one for BN learning and one for updating the HMM transition probabilities. In this work, we first analyze FB training for a conventional HMM and show that state PDF parameter estimation is analogous to training a classifier on weighted data, with the gamma variable of the FB algorithm playing the role of the data weight. From this perspective, it is straightforward to apply FB-based training to HMM/BN models, since the BN learning algorithm supports training with weighted data. Experiments on accented speech (American, British and Australian English) show that FB training outperforms the previous Viterbi learning approach and that the HMM/BN model achieves better performance than the conventional HMM.
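To make the gamma-as-data-weight view concrete, the following is a minimal sketch (not the paper's implementation) of the forward-backward recursions on a toy two-state HMM with Gaussian state PDFs. All parameter values and the observation sequence are illustrative assumptions; the final line shows a weighted-data update of the state means, where each frame contributes to each state with weight gamma[t, j].

```python
import numpy as np

# Toy 2-state HMM with 1-D Gaussian state PDFs (illustrative values only).
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])            # state transition probabilities
pi = np.array([0.6, 0.4])             # initial state probabilities
means = np.array([0.0, 3.0])          # Gaussian means per state
stds = np.array([1.0, 1.0])           # Gaussian std devs per state

obs = np.array([0.2, 0.1, 2.8, 3.1, 2.9])  # toy observation sequence

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Emission likelihoods B[t, j] = p(o_t | state j)
B = gaussian(obs[:, None], means[None, :], stds[None, :])
T, N = B.shape

# Forward pass: alpha[t, j] = p(o_1..o_t, q_t = j)
alpha = np.zeros((T, N))
alpha[0] = pi * B[0]
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * B[t]

# Backward pass: beta[t, j] = p(o_{t+1}..o_T | q_t = j)
beta = np.zeros((T, N))
beta[-1] = 1.0
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[t + 1] * beta[t + 1])

# State occupancy probabilities: the "gamma" variables.
gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)

# Weighted-data parameter update: each frame is a training sample for
# state j with weight gamma[t, j] -- the weighted-classifier analogy.
new_means = (gamma * obs[:, None]).sum(axis=0) / gamma.sum(axis=0)
```

In the HMM/BN case the same gammas would be passed as sample weights to the BN learning algorithm instead of into a Gaussian mean update.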