Existing deep learning-based speech enhancement (SE) methods typically entail high computational complexity. In this paper, we propose to split the input audio into adjacent, equally spaced sub-band signals with an analysis filter bank and feed these sub-band signals into an SE model that recovers the denoised sub-band signals. The denoised sub-band signals are then reconstructed into the full-band signal by a synthesis filter bank. To complement the sub-band features with full-band spectral information, we design a full-band information fusion module. We also devise a full-band spectrum prediction module that predicts the target full-band spectrum and assists model training. Additionally, a pseudo noisy waveform reconstruction (PNWR) loss is introduced to further improve SE performance. Experiments demonstrate that the proposed scheme reduces computational cost by about half with almost no performance loss. The final SE system (Sub-PNWR) outperforms current state-of-the-art methods.
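The analysis/enhance/synthesis pipeline described above can be sketched in a few lines; the minimal example below is an illustrative assumption, not the paper's architecture. The `SubBandSE` class, filter lengths, band count, and the placeholder enhancement network are all hypothetical, and the fusion module, prediction module, and PNWR loss are omitted.

```python
import torch
import torch.nn as nn

class SubBandSE(nn.Module):
    """Minimal sketch of the sub-band SE pipeline: analysis filter bank
    -> per-sub-band enhancement -> synthesis filter bank. All sizes and
    modules here are illustrative placeholders."""

    def __init__(self, num_bands: int = 4, taps: int = 64):
        super().__init__()
        # Analysis filter bank: one learned FIR filter per sub-band,
        # decimating by num_bands so the total sample count is preserved.
        self.analysis = nn.Conv1d(1, num_bands, kernel_size=taps,
                                  stride=num_bands, padding=taps // 2)
        # Placeholder enhancement network operating on the stacked
        # sub-band signals; the paper uses a dedicated SE model here.
        self.enhance = nn.Sequential(
            nn.Conv1d(num_bands, 64, 3, padding=1), nn.ReLU(),
            nn.Conv1d(64, num_bands, 3, padding=1),
        )
        # Synthesis filter bank: a transposed convolution upsamples the
        # denoised sub-bands back to a full-band waveform.
        self.synthesis = nn.ConvTranspose1d(num_bands, 1, kernel_size=taps,
                                            stride=num_bands, padding=taps // 2)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, 1, samples)
        bands = self.analysis(wav)       # (batch, num_bands, samples / num_bands)
        denoised = self.enhance(bands)   # denoised sub-band signals
        return self.synthesis(denoised)  # reconstructed full-band signal

noisy = torch.randn(2, 1, 16000)         # 1 s of 16 kHz audio
clean_est = SubBandSE()(noisy)           # same shape as the input
```

Because the enhancement network runs on decimated sub-band signals rather than the full-rate waveform, its per-sample workload drops roughly in proportion to the decimation factor, which is the source of the computational savings the paper reports.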