Recent deep neural network (DNN) models have achieved high performance in speech enhancement. However, deploying such complex models in resource-constrained environments without significant performance degradation remains challenging. Knowledge distillation (KD), a technique in which a smaller (student) model is trained to mimic the behavior of a larger, more complex (teacher) model, has emerged as a popular approach to address this challenge. In this paper, we propose a feature-augmentation-based knowledge distillation method for speech enhancement, leveraging the information stored in the intermediate latent features of the DNN teacher model to train a smaller, more efficient student model. Experimental results on the VoiceBank+DEMAND dataset demonstrate the effectiveness of the proposed knowledge distillation method for speech enhancement.
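To make the idea concrete, the sketch below illustrates a generic feature-level knowledge distillation setup for a waveform enhancement model: a frozen teacher provides intermediate latent features, and the student is trained with a combination of a signal-level enhancement loss and a feature-matching loss. This is a minimal illustration under assumed architectures and hyperparameters (TeacherSE, StudentSE, the 1x1 projection, and the weight alpha are all hypothetical), not the specific models or loss formulation proposed in the paper.

```python
# Minimal sketch of feature-level knowledge distillation for speech enhancement.
# All module names, layer sizes, and loss weights here are illustrative
# assumptions, not the paper's actual architecture or training recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TeacherSE(nn.Module):
    """Stand-in for a large pretrained enhancement model (hypothetical)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.encoder = nn.Conv1d(1, hidden, kernel_size=16, stride=8, padding=4)
        self.decoder = nn.ConvTranspose1d(hidden, 1, kernel_size=16, stride=8, padding=4)

    def forward(self, noisy):
        feat = F.relu(self.encoder(noisy))   # intermediate latent features
        return self.decoder(feat), feat


class StudentSE(nn.Module):
    """Smaller model trained to mimic the teacher's latent features."""
    def __init__(self, hidden=64, teacher_hidden=256):
        super().__init__()
        self.encoder = nn.Conv1d(1, hidden, kernel_size=16, stride=8, padding=4)
        self.decoder = nn.ConvTranspose1d(hidden, 1, kernel_size=16, stride=8, padding=4)
        # 1x1 projection so student features can be compared with teacher features
        self.proj = nn.Conv1d(hidden, teacher_hidden, kernel_size=1)

    def forward(self, noisy):
        feat = F.relu(self.encoder(noisy))
        return self.decoder(feat), self.proj(feat)


def distillation_step(teacher, student, noisy, clean, alpha=0.5):
    """One training step: enhancement loss plus latent-feature matching loss."""
    with torch.no_grad():
        _, t_feat = teacher(noisy)           # teacher is frozen during distillation
    est, s_feat = student(noisy)
    loss_se = F.l1_loss(est, clean)          # signal-level enhancement loss
    loss_kd = F.mse_loss(s_feat, t_feat)     # feature-level distillation loss
    return loss_se + alpha * loss_kd


if __name__ == "__main__":
    teacher, student = TeacherSE(), StudentSE()
    noisy = torch.randn(4, 1, 16000)         # batch of 1-second 16 kHz waveforms
    clean = torch.randn(4, 1, 16000)
    loss = distillation_step(teacher, student, noisy, clean)
    loss.backward()
    print(f"total loss: {loss.item():.4f}")
```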