ISCA Archive Interspeech 2022

Convolutional Recurrent Neural Network with Auxiliary Stream for Robust Variable-Length Acoustic Scene Classification

Joon-Hyuk Chang, Won-Gook Choi

Deep learning has proven well suited to acoustic scene classification (ASC), and neural networks have brought significant performance improvements to the task. However, most studies have relied on convolutional neural networks (CNNs) rather than recurrent neural networks (RNNs) or convolutional recurrent neural networks (CRNNs), even though acoustic scene data is a temporal signal. In practice, CRNNs are rarely adopted and have ranked lower in recent Detection and Classification of Acoustic Scenes and Events (DCASE) challenges for fixed-length (i.e., 10 s) ASC. In this paper, an auxiliary stream technique is proposed that improves the performance of CRNNs over that of CNNs by controlling the inductive bias of the RNN. The auxiliary stream trains the CNN to extract effective embeddings and is connected only during training; it therefore adds no model complexity at inference. The experimental results demonstrate the superiority of the proposed method regardless of the CNN model used in the CRNN. Additionally, the proposed method is robust on variable-length ASC through streaming inference, demonstrating the importance of the CRNN.
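
Below is a minimal PyTorch-style sketch of one way such an auxiliary stream could be wired: a CNN encoder feeds an RNN (the main stream), while a separate auxiliary classification head supervises the CNN embeddings directly and is evaluated only in training mode, so inference uses the CRNN path alone. The module sizes, pooling choices, and head design are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class AuxCRNN(nn.Module):
    """Sketch of a CRNN whose CNN encoder is also supervised by an
    auxiliary head during training; at inference only the RNN path runs."""

    def __init__(self, n_mels=64, n_classes=10, hidden=128):
        super().__init__()
        # CNN encoder: collapses the frequency axis, keeps the time axis.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),        # (B, 64, 1, T)
        )
        # Main stream: RNN over the time axis, then a classifier.
        self.rnn = nn.GRU(64, hidden, batch_first=True)
        self.main_head = nn.Linear(hidden, n_classes)
        # Auxiliary stream: classifies time-pooled CNN embeddings directly;
        # only used to compute an extra loss during training.
        self.aux_head = nn.Linear(64, n_classes)

    def forward(self, x):                           # x: (B, 1, n_mels, T)
        emb = self.cnn(x).squeeze(2).transpose(1, 2)      # (B, T, 64)
        out, _ = self.rnn(emb)
        main_logits = self.main_head(out[:, -1])          # last time step
        if self.training:
            aux_logits = self.aux_head(emb.mean(dim=1))   # pooled CNN embedding
            return main_logits, aux_logits
        return main_logits                          # auxiliary stream dropped

if __name__ == "__main__":
    model = AuxCRNN()
    x = torch.randn(4, 1, 64, 500)        # 4 log-mel spectrograms, 500 frames
    main_logits, aux_logits = model(x)    # training mode: both streams
    model.eval()
    main_only = model(x)                  # inference: CRNN path only

In such a setup the two outputs would typically be combined during training, e.g. loss = ce(main_logits, y) + lambda * ce(aux_logits, y) with a hypothetical weight lambda; at inference the auxiliary head is simply skipped, leaving the deployed model's complexity unchanged.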