The global signal-to-noise ratio (gSNR) is defined as the ratio of speech energy to noise energy over the entire noisy audio signal. However, as noise interference increases, the generalization ability declines when traditional features (e.g., raw waveforms and MFCCs) are fed directly into a statistical model to estimate a single fullband gSNR. In this paper, we propose a multi-subband-based gSNR estimation network (MSGNet). Specifically, we split the noisy speech waveforms into Bark-scale subbands to obtain higher-resolution representations of the middle and low frequencies. Then, convolutional neural networks (CNNs) are used to learn a non-linear function that estimates the speech-to-noise energy ratio of each subband from the input multi-subband features. Finally, the fullband gSNR is calculated by integrating the subbands' speech and noise energies. Extensive experimental results on the AURORA-2J dataset demonstrate that the proposed MSGNet significantly reduces the mean absolute error compared with baseline gSNR estimation methods.
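
To make the final combination step concrete, the following is a minimal sketch (not the authors' implementation) of how per-subband speech and noise energies could be aggregated into a fullband gSNR in decibels. The number of Bark-scale subbands, the energy values, and all function and variable names here are illustrative assumptions, since the abstract does not specify the exact aggregation rule.

```python
import numpy as np


def fullband_gsnr_db(speech_energy_per_band, noise_energy_per_band, eps=1e-12):
    """Combine per-subband speech/noise energies into a fullband gSNR (dB).

    Assumes the speech and noise energy of each Bark-scale subband is
    available (e.g., recovered from per-subband ratio estimates); the
    fullband gSNR then follows from the ratio of the summed energies.
    """
    speech_total = np.sum(speech_energy_per_band)
    noise_total = np.sum(noise_energy_per_band)
    return 10.0 * np.log10((speech_total + eps) / (noise_total + eps))


# Hypothetical example: six Bark-scale subbands with estimated energies.
speech_e = np.array([2.1, 1.4, 0.9, 0.5, 0.3, 0.1])
noise_e = np.array([0.4, 0.5, 0.6, 0.6, 0.7, 0.8])
print(f"Estimated fullband gSNR: {fullband_gsnr_db(speech_e, noise_e):.2f} dB")
```

In this sketch the subbands with stronger speech energy naturally dominate the numerator, so the fullband estimate reflects the integration of subbands carrying different speech and noise energies, as described above.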