ISCA Archive Interspeech 2023

How ChatGPT is Robust for Spoken Language Understanding?

Guangpeng Li, Lu Chen, Kai Yu

Large language models (LLMs) such as ChatGPT have shown strong performance on various NLP tasks. However, it remains unclear whether these LLMs, which are trained on large corpora of written text, are robust when understanding spoken text. In this paper, we therefore present a detailed investigation of robust spoken language understanding (SLU) with ChatGPT. In our experiments, we evaluate ChatGPT on two sets of public datasets, Spoken SQuAD and ASR-GLUE, whose texts contain ASR errors. Quantitative and qualitative analyses of the experimental results show that ChatGPT not only performs very well on SLU tasks but can also recover some ASR errors through its strong reasoning ability.
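Evaluations of this kind typically score a model's answers on ASR-corrupted text with SQuAD-style metrics such as token-level F1. A minimal sketch of such a scorer (the helper names are illustrative, not taken from the paper):

```python
# Illustrative sketch: token-level F1, the standard SQuAD-style metric
# for comparing QA answers on clean vs. ASR-noisy transcripts.
import re
from collections import Counter

def normalize(text):
    # Lowercase, strip punctuation, split into whitespace tokens.
    return re.sub(r"[^\w\s]", "", text.lower()).split()

def token_f1(pred, gold):
    p, g = normalize(pred), normalize(gold)
    common = Counter(p) & Counter(g)          # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p)
    recall = overlap / len(g)
    return 2 * precision * recall / (precision + recall)

# Example: an ASR homophone error turns "their" into "there",
# so only 2 of 3 tokens match the reference answer.
print(token_f1("in their house", "in there house"))  # → 0.666...
```

Averaging this score over a dataset with and without injected ASR errors gives a simple measure of how much recognition noise degrades understanding.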