Recently, acoustic models based on deep neural networks (DNNs) have been introduced and have shown dramatic improvements over GMM-based acoustic models in a variety of tasks. In this paper, we address the noise robustness of DNN acoustic models. Inspired by Missing Feature Theory and static noise-aware training, we propose an approach that uses a noise-suppressed acoustic feature together with estimated noise information as the input of the DNN. We use simple spectral subtraction for noise suppression, and the noise is estimated either per utterance or per frame. In noisy speech recognition experiments, we compared the proposed method with other methods, and the proposed method showed superior performance to the other approaches. For noise estimation per utterance with log Mel filterbank features, we obtained a 28.6% word error rate reduction compared with multi-condition training and a 5.9% reduction compared with noise adaptive training.
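As a rough illustration of the input construction described above (a minimal sketch, not the authors' implementation), the following Python code applies per-utterance noise estimation and spectral subtraction to magnitude spectra, then concatenates the noise-suppressed log Mel filterbank features with a log Mel representation of the estimated noise to form the DNN input. All function names, frame parameters, and the placeholder Mel filterbank are assumptions for illustration only.

```python
import numpy as np

def stft_mag(signal, frame_len=400, hop=160, n_fft=512):
    """Frame the signal, apply a Hamming window, and return magnitude spectra."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    window = np.hamming(frame_len)
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, n=n_fft, axis=1))

def per_utterance_noise(mag, n_init_frames=10):
    """Estimate the noise spectrum from the first few (assumed speech-free) frames."""
    return mag[:n_init_frames].mean(axis=0)

def spectral_subtraction(mag, noise_mag, floor=0.01):
    """Subtract the estimated noise magnitude, flooring to avoid negative values."""
    return np.maximum(mag - noise_mag, floor * mag)

def log_mel_fbank(mag, mel_fb):
    """Project magnitude spectra onto a Mel filterbank and take the log."""
    return np.log(np.maximum(mag @ mel_fb.T, 1e-10))

# Build the DNN input: noise-suppressed log Mel features + estimated noise info.
signal = np.random.randn(16000)       # 1 s of audio at 16 kHz (placeholder)
mel_fb = np.random.rand(40, 257)      # placeholder 40-band Mel filterbank matrix

mag = stft_mag(signal)                                   # (frames, 257)
noise = per_utterance_noise(mag)                         # (257,)
clean_mag = spectral_subtraction(mag, noise)
feat = log_mel_fbank(clean_mag, mel_fb)                  # (frames, 40)
noise_feat = log_mel_fbank(noise[np.newaxis, :], mel_fb) # (1, 40)

# Append the per-utterance noise estimate to every frame's feature vector.
dnn_input = np.hstack([feat, np.repeat(noise_feat, len(feat), axis=0)])  # (frames, 80)
```

For per-frame noise estimation, the single averaged noise spectrum in this sketch would be replaced by a frame-by-frame noise track, with the corresponding noise feature appended to each frame instead of a repeated utterance-level vector.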