ISCA Archive Interspeech 2022

CTRL: Continual Representation Learning to Transfer Information of Pre-trained for WAV2VEC 2.0

Jae-Hong Lee, Chae-Won Lee, Jin-Seong Choi, Joon-Hyuk Chang, Woo Kyeong Seong, Jeonghan Lee

Representation models such as WAV2VEC 2.0 (W2V2) achieve remarkable speech recognition performance by pre-training only on unlabeled datasets and fine-tuning on a small amount of labeled data. To obtain a richer representation with such a model, it is crucial to train on datasets from multiple domains. The conventional approach to handling multiple domains is to train a model from scratch on a merged dataset. However, representation learning requires excessive computation for pre-training, which becomes a severe problem as the dataset size increases. In this study, we present continual representation learning (CTRL), a framework that leverages continual learning methods to continually retrain a pre-trained representation model while transferring information from the previous model without access to the historical datasets. The framework continually pre-trains a pre-trained W2V2 model using continual learning methods redesigned for self-supervised learning. To evaluate our framework, we continually pre-train W2V2 with CTRL on Librispeech, the Wall Street Journal, and TED-LIUM V3, in that order. The results demonstrate that the proposed approach improves speech recognition performance on all three datasets compared with the baseline W2V2 pre-trained only on Librispeech.
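
As a rough illustration of the continual pre-training step described above, the sketch below retrains a copy of the previous W2V2 model on new-domain audio while a distillation-style transfer term keeps its representations close to those of the frozen previous model, so no historical data is needed. The abstract does not specify CTRL's actual continual learning loss; the MSE transfer term, the weight alpha, and the externally supplied pretraining_loss are illustrative assumptions, not the authors' method.

    # Minimal sketch of one continual pre-training update, assuming a
    # distillation-style transfer term from the frozen previous model.
    import copy
    import torch
    import torch.nn.functional as F
    from transformers import Wav2Vec2Model

    teacher = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")  # previously pre-trained model
    student = copy.deepcopy(teacher)                                   # model retrained on the new domain
    teacher.eval()
    for p in teacher.parameters():
        p.requires_grad_(False)

    optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)
    alpha = 0.5  # hypothetical weight on the transfer term

    def continual_step(input_values, pretraining_loss):
        # `input_values`: raw waveform batch from the new domain, shape (batch, samples).
        # `pretraining_loss`: the standard W2V2 self-supervised objective
        # (contrastive + diversity) already computed on this batch elsewhere.
        student_hidden = student(input_values).last_hidden_state
        with torch.no_grad():
            teacher_hidden = teacher(input_values).last_hidden_state

        # Transfer term: penalize drift from the previous model's representations,
        # standing in for access to the historical datasets.
        transfer = F.mse_loss(student_hidden, teacher_hidden)
        loss = pretraining_loss + alpha * transfer

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In this reading, each new domain (e.g., Wall Street Journal, then TED-LIUM V3) is handled by repeating such updates starting from the model produced on the previous domain, rather than re-running pre-training on a merged corpus from scratch.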