Recently, with the rapid development of deep learning, monaural speech enhancement (SE) has achieved significant improvements in both intelligibility and speech quality. In the time-frequency (TF) domain, convolutional neural networks (CNNs) are commonly used to predict a mask that maps the noisy magnitude spectrum to the clean magnitude spectrum. The deep complex convolution recurrent network (DCCRN) applies complex-valued arithmetic to its convolutional and long short-term memory (LSTM) layers and has achieved good results. However, LSTM can only model short time frames, and its performance often degrades when processing information over longer time spans. The single convolution kernel size of the encoder-decoder also limits the model's ability to extract and restore features. In this paper, we design a new network to address these problems, called the Deep Complex Temporal Convolutional Network (DCTCN), in which the temporal convolutional network (TCN) follows the rules of complex arithmetic. The encoder and decoder use a selective kernel network (SKNet) to capture multi-scale receptive fields during the encoding and decoding phases. On the TIMIT and VoiceBank+DEMAND datasets, our model obtains highly competitive results on both semi-causal and non-causal tasks compared with previous models.
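The complex-arithmetic rule that DCCRN-style layers apply to convolutions can be illustrated with a minimal sketch: a complex kernel W = Wr + jWi convolved with a complex input X = Xr + jXi decomposes into four real convolutions, Re = Wr*Xr − Wi*Xi and Im = Wr*Xi + Wi*Xr. The pure-Python 1-D version below is illustrative only, not the paper's implementation, and the function names are hypothetical.

```python
def conv1d(x, w):
    """Valid-mode real 1-D convolution (cross-correlation form)."""
    n = len(x) - len(w) + 1
    return [sum(x[i + k] * w[k] for k in range(len(w))) for i in range(n)]

def complex_conv1d(xr, xi, wr, wi):
    """Complex 1-D convolution built from four real convolutions.

    Implements (Wr + jWi) * (Xr + jXi)
             = (Wr*Xr - Wi*Xi) + j(Wr*Xi + Wi*Xr),
    the same decomposition DCCRN applies to its conv/LSTM layers.
    """
    re = [a - b for a, b in zip(conv1d(xr, wr), conv1d(xi, wi))]
    im = [a + b for a, b in zip(conv1d(xi, wr), conv1d(xr, wi))]
    return re, im
```

The result agrees with evaluating the same convolution directly in Python's built-in complex arithmetic, which is a convenient way to sanity-check a complex-valued layer.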