ISCA Archive Interspeech 2024

Streamlining Speech Enhancement DNNs: an Automated Pruning Method Based on Dependency Graph with Advanced Regularized Loss Strategies

Zugang Zhao, Jinghong Zhang, Yonghui Liu, Jianbing Liu, Kai Niu, Zhiqiang He

In the burgeoning field of speech enhancement, the quest for high-performing deep neural networks (DNNs) often grapples with increased computational demand and model size. This study presents a novel structured pruning method that optimizes models via a Dependency Graph, achieving automatic dimension reduction of network layers without manually set pruning ratios, a capability not previously demonstrated. Additionally, we propose a regularized loss strategy that adapts to variable-scale sparsity, improving compression efficiency. Through extensive experiments, we demonstrate our method's ability to achieve substantial reductions in model size and computational cost while maintaining performance. Notably, our findings also question the utility of the grouping trick in linear layers, suggesting that it may impede effective pruning. This research not only advances the compression of speech enhancement DNNs but also enriches the discourse on pruning methodology.
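The abstract does not include implementation details, so the sketch below is only a plausible illustration of the two ingredients it names: dependency-graph-driven structured pruning and a sparsity-inducing regularized loss. It uses the open-source torch-pruning library, which provides a DependencyGraph API for structured pruning (this may or may not be the authors' implementation), and an L1 penalty on BatchNorm scales as one common form of sparsity regularization. The toy model, the pruning threshold, and the `reg_weight` value are assumptions for illustration, not the paper's configuration.

```python
# Minimal sketch (not the authors' code): dependency-graph structured pruning
# plus an L1 sparsity regularizer, via torch-pruning
# (https://github.com/VainF/Torch-Pruning). Model, threshold, and reg_weight
# are illustrative assumptions.
import torch
import torch.nn as nn
import torch_pruning as tp

# Toy spectrogram-domain enhancement network (stand-in for a real SE DNN).
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1),
    nn.BatchNorm2d(32),
    nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
example_inputs = torch.randn(1, 1, 257, 100)  # (batch, channel, freq, time)

# Build the dependency graph so that pruning one layer automatically
# propagates to all coupled layers (here, the following BatchNorm and Conv).
DG = tp.DependencyGraph().build_dependency(model, example_inputs=example_inputs)

def regularized_loss(task_loss, model, reg_weight=1e-4):
    # Sparsity-regularized training objective: task loss + L1 penalty on
    # BatchNorm scale factors, pushing channel scales toward zero so the
    # corresponding channels can later be pruned. reg_weight is an assumed value.
    l1 = sum(m.weight.abs().sum() for m in model.modules()
             if isinstance(m, nn.BatchNorm2d))
    return task_loss + reg_weight * l1

# After sparse training, prune channels whose BN scale fell below a threshold;
# no per-layer pruning ratio is set by hand.
bn = model[1]
idxs = (bn.weight.detach().abs() < 1e-2).nonzero().flatten().tolist()
if idxs:
    group = DG.get_pruning_group(model[0], tp.prune_conv_out_channels, idxs=idxs)
    if DG.check_pruning_group(group):  # skip groups that would empty a layer
        group.prune()

print(model)  # channel counts shrink consistently across coupled layers
```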