Attention-based models have achieved new state-of-the-art results on many tasks, but their computational cost increases drastically compared with previous methods. For most acoustic tasks, excessively long speech sequences exacerbate this problem while benefiting little from the attention mechanism. We propose an information-magnitude (IM) based dynamic stride convolution (IM-DSC) method, which first computes the IM of each frame according to its importance and then dynamically squeezes the redundant frames. We conduct experiments on speech translation and automatic speech recognition (ASR) tasks. Our results show improvements of 0.5 BLEU and 0.4 BLEU on the MuST-C En-De and En-Fr datasets with a 22% compression ratio. For the ASR task, we obtain a 0.2 WER reduction with a 21% compression ratio on the LibriSpeech data.
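The two-step pipeline described above (score each frame's importance, then squeeze redundant frames) can be sketched as follows. The abstract does not define the IM measure or the dynamic stride convolution, so this illustrative sketch substitutes an L2-norm proxy for frame importance and simple frame dropping for the squeezing step; the function and parameter names are hypothetical.

```python
import numpy as np

def dynamic_squeeze(frames, compression_ratio=0.22):
    """Sketch of IM-based frame reduction (assumptions, not the paper's
    exact method): score frames by L2 norm as a stand-in for the IM,
    then drop the lowest-scoring fraction of frames."""
    im = np.linalg.norm(frames, axis=1)        # per-frame importance proxy
    n_drop = int(round(len(frames) * compression_ratio))
    keep = np.sort(np.argsort(im)[n_drop:])    # keep high-IM frames, preserve temporal order
    return frames[keep]

rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 80))            # 100 speech frames, 80-dim features
squeezed = dynamic_squeeze(frames)
print(frames.shape, squeezed.shape)            # (100, 80) (78, 80)
```

In the actual method the redundant frames are merged by a convolution whose stride adapts to the IM scores rather than being discarded outright; the sketch only illustrates the per-frame scoring and compression-ratio bookkeeping.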