ISCA Archive Interspeech 2023

Accurate and Structured Pruning for Efficient Automatic Speech Recognition

Huiqiang Jiang, Li Lyna Zhang, Yuang Li, Yu Wu, Shijie Cao, Ting Cao, Yuqing Yang, Jinyu Li, Mao Yang, Lili Qiu

Automatic Speech Recognition (ASR) has seen remarkable advancements with deep neural networks, such as Transformer and Conformer. However, these models typically have large model sizes and high inference costs, making them difficult to deploy on resource-limited devices. In this paper, we propose a novel compression strategy that leverages structured pruning and knowledge distillation to reduce the model size and inference cost of the Conformer model while preserving high recognition performance. Our approach uses a set of binary masks to indicate whether each Conformer module is retained or pruned, and employs L0 regularization to learn the optimal mask values. To further enhance pruning performance, we use a layerwise distillation strategy to transfer knowledge from the unpruned to the pruned model. Our method outperforms all pruning baselines on the widely used LibriSpeech benchmark, achieving a 50% reduction in model size and a 28% reduction in inference cost with minimal performance loss.
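The abstract does not spell out how the binary masks are learned, but L0 regularization with learnable gates is commonly implemented via the hard-concrete relaxation (Louizos et al., 2018). The sketch below is an illustrative NumPy implementation of that general recipe, not the paper's code; all function names and hyperparameter values are assumptions.

```python
import numpy as np

# Standard hard-concrete hyperparameters (assumed; not from the paper).
BETA, GAMMA, ZETA = 2.0 / 3.0, -0.1, 1.1

def hard_concrete_gate(log_alpha, rng):
    """Sample a stretched, clipped gate in [0, 1] per prunable module.

    During training the gate multiplies a module's output; gates driven
    to 0 correspond to pruned modules.
    """
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=np.shape(log_alpha))
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1.0 - u) + log_alpha) / BETA))
    return np.clip(s * (ZETA - GAMMA) + GAMMA, 0.0, 1.0)

def expected_l0(log_alpha):
    """Expected fraction of gates that are nonzero: the L0 penalty term."""
    return 1.0 / (1.0 + np.exp(-(np.asarray(log_alpha)
                                 - BETA * np.log(-GAMMA / ZETA))))

def layerwise_distill_loss(teacher_feats, student_feats):
    """MSE between matched intermediate outputs of unpruned/pruned models."""
    return float(np.mean([np.mean((t - s) ** 2)
                          for t, s in zip(teacher_feats, student_feats)]))
```

Under this formulation, the training objective would combine the ASR loss, a sparsity weight times the summed `expected_l0` penalty, and a distillation weight times `layerwise_distill_loss`; thresholding the learned gates after training yields the structured pruning decision per module.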


doi: 10.21437/Interspeech.2023-809

Cite as: Jiang, H., Zhang, L.L., Li, Y., Wu, Y., Cao, S., Cao, T., Yang, Y., Li, J., Yang, M., Qiu, L. (2023) Accurate and Structured Pruning for Efficient Automatic Speech Recognition. Proc. INTERSPEECH 2023, 4104-4108, doi: 10.21437/Interspeech.2023-809

@inproceedings{jiang23d_interspeech,
  author={Huiqiang Jiang and Li Lyna Zhang and Yuang Li and Yu Wu and Shijie Cao and Ting Cao and Yuqing Yang and Jinyu Li and Mao Yang and Lili Qiu},
  title={{Accurate and Structured Pruning for Efficient Automatic Speech Recognition}},
  year=2023,
  booktitle={Proc. INTERSPEECH 2023},
  pages={4104--4108},
  doi={10.21437/Interspeech.2023-809},
  issn={2958-1796}
}