In this paper, we propose a method for perceptually optimizing a deep neural network (DNN)-based speech coder using multi-time-scale perceptual loss functions. We utilize a psychoacoustic model (PAM) to measure perceptual distortion. Perceptual optimization is performed using losses based on a frame-wise global distortion and subframe-wise local distortions. To this end, the input frame is divided into seven subframes, and quantization noise spectra and global masking thresholds (GMTs) are estimated both frame-wise and subframe-wise, and the resulting distortion measures are combined. The proposed optimization method was tested on a baseline DNN speech coder comprising stacks of ResNet-type gated linear units (ResGLUs). We employed a uniform noise model for the quantizer at the bottleneck. Test results showed that the proposed coder controlled quantization noise both globally and locally, achieving higher perceptual quality than AMR-WB and OPUS, especially at low bitrates.
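To make the loss construction concrete, the following is a minimal sketch of how a frame-wise global term and subframe-wise local terms over seven subframes might be combined. It assumes the GMT spectra have already been produced by the PAM and are passed in as tensors; all function names, tensor shapes, and the weighting factor `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import torch


def noise_to_mask_ratio(noise_power: torch.Tensor,
                        masking_threshold: torch.Tensor) -> torch.Tensor:
    """Mean ratio of quantization-noise power to the global masking
    threshold (GMT); values above 1 indicate audible noise."""
    return torch.mean(noise_power / (masking_threshold + 1e-12))


def multi_scale_perceptual_loss(ref: torch.Tensor,
                                dec: torch.Tensor,
                                gmt_frame: torch.Tensor,
                                gmt_sub: torch.Tensor,
                                num_subframes: int = 7,
                                alpha: float = 0.5) -> torch.Tensor:
    """Combine a frame-wise (global) and subframe-wise (local) perceptual
    distortion. `ref`/`dec` are time-domain frames of shape
    (batch, frame_len); `gmt_frame` is the frame-level GMT spectrum and
    `gmt_sub` holds one GMT spectrum per subframe."""
    noise = dec - ref

    # Frame-wise global distortion: noise spectrum vs. frame-level GMT.
    noise_spec = torch.abs(torch.fft.rfft(noise, dim=-1)) ** 2
    global_loss = noise_to_mask_ratio(noise_spec, gmt_frame)

    # Subframe-wise local distortions: split the frame into seven
    # subframes (frame_len is assumed divisible by num_subframes) and
    # compare each subframe's noise spectrum with its own GMT.
    sub_noise = noise.reshape(noise.shape[0], num_subframes, -1)
    sub_spec = torch.abs(torch.fft.rfft(sub_noise, dim=-1)) ** 2
    local_loss = noise_to_mask_ratio(sub_spec, gmt_sub)

    # Weighted combination of the global and local terms.
    return alpha * global_loss + (1.0 - alpha) * local_loss
```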
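A uniform noise model at the bottleneck is commonly realized by replacing hard rounding with additive noise drawn uniformly from [-0.5, 0.5) during training, which keeps the quantizer differentiable for backpropagation. The sketch below follows that common formulation under the assumption of a unit quantization step; `soft_quantize` is a hypothetical name and may differ from the paper's exact quantizer.

```python
import torch


def soft_quantize(code: torch.Tensor, training: bool = True) -> torch.Tensor:
    """Differentiable stand-in for scalar quantization at the bottleneck:
    during training, quantization is modeled as additive uniform noise in
    [-0.5, 0.5) (unit step size assumed); at inference, hard rounding is
    applied instead."""
    if training:
        return code + (torch.rand_like(code) - 0.5)
    return torch.round(code)
```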