ISCA Archive Interspeech 2022

Adversarial Knowledge Distillation For Robust Spoken Language Understanding

Ye Wang, Baishun Ling, Yanmeng Wang, Junhao Xue, Shaojun Wang, Jing Xiao

In spoken dialog systems, Spoken Language Understanding (SLU) usually consists of two parts: Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU). In practice, such a decoupled ASR/NLU design enables fast model iteration on both components. However, it also means that the NLU model suffers from errors introduced by ASR, which degrades overall performance. Improving the NLU model through Knowledge Distillation (KD) from large Pre-trained Language Models (PLMs) has proved effective and has drawn considerable attention recently. In this work, we propose a novel Robust Adversarial Knowledge Distillation (RAKD) framework that introduces adversarial training into knowledge distillation to improve the robustness of the NLU model to ASR errors. We conduct experiments on a classification dataset built from a real-world spoken dialog system as well as on existing datasets, where the proposed framework yields significant improvements over competitive baselines.
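The abstract does not give the training objective, but the general idea of combining adversarial training with knowledge distillation can be sketched as follows. This is a minimal PyTorch sketch under our own assumptions, not the authors' RAKD implementation: a frozen teacher PLM provides soft labels, a smaller student NLU classifier is trained on both clean and perturbed inputs, the perturbation is an FGSM-style step on the student's input embeddings standing in for ASR-induced noise, and the hyper-parameters (`epsilon`, `alpha`, temperature `T`) are hypothetical.

```python
# Hedged sketch of adversarial knowledge distillation (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)


def adversarial_kd_step(student, teacher, embeddings, labels,
                        optimizer, epsilon=1e-2, alpha=0.5):
    """One training step: clean cross-entropy + clean KD + KD on
    adversarially perturbed embeddings (assumed objective, illustrative only)."""
    with torch.no_grad():
        teacher_logits = teacher(embeddings)          # soft labels from frozen teacher

    embeddings = embeddings.detach().requires_grad_(True)
    student_logits = student(embeddings)
    loss_clean = F.cross_entropy(student_logits, labels) \
        + distill_loss(student_logits, teacher_logits)

    # FGSM-style perturbation in embedding space, mimicking ASR-induced noise.
    grad, = torch.autograd.grad(loss_clean, embeddings, retain_graph=True)
    adv_embeddings = (embeddings + epsilon * grad.sign()).detach()
    loss_adv = distill_loss(student(adv_embeddings), teacher_logits)

    loss = loss_clean + alpha * loss_adv
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy usage with random tensors standing in for transcript embeddings.
    dim, n_classes = 32, 4
    teacher = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, n_classes)).eval()
    student = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, n_classes))
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    x = torch.randn(8, dim)
    y = torch.randint(0, n_classes, (8,))
    print(adversarial_kd_step(student, teacher, x, y, opt))
```

The sketch keeps the teacher fixed and exposes the student to gradient-based perturbations of its inputs, which is one common way to make a distilled model more robust to noisy (e.g. ASR-corrupted) text; the actual RAKD objective may differ.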