We describe an improved modulation-domain loss for deep-learning-based speech enhancement (SE) systems. We use a simple self-supervised speech reconstruction task to learn a set of spectro-temporal receptive fields (STRFs). Similar to the recently developed spectro-temporal modulation error, the learned STRFs are used to compute a weighted mean-squared error in the modulation domain for training a speech enhancement system. Experiments show that training SE systems with the improved modulation-domain loss consistently improves objective predictions of speech quality and intelligibility. Additionally, we show that the SE systems reduce the word error rate of a state-of-the-art automatic speech recognition system at low SNRs.
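To make the idea concrete, the following is a minimal sketch of a modulation-domain weighted MSE: a bank of (pre-learned) STRF kernels is applied to the clean and enhanced log-spectrograms by 2-D convolution, and the squared error between the two modulation representations is averaged, optionally with per-filter weights. The function name, tensor shapes, and weighting scheme are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a modulation-domain weighted MSE loss (assumed details).
import torch
import torch.nn.functional as F

def modulation_domain_mse(enhanced_spec, clean_spec, strf_kernels, weights=None):
    """
    enhanced_spec, clean_spec: (batch, freq, time) log-magnitude spectrograms.
    strf_kernels: (n_strf, kf, kt) spectro-temporal receptive fields, assumed
                  to have been learned beforehand (e.g. via a self-supervised
                  reconstruction task) and kept fixed during SE training.
    weights: optional (n_strf,) per-filter weights for the weighted MSE.
    """
    kernels = strf_kernels.unsqueeze(1)                        # (n_strf, 1, kf, kt)
    enh_mod = F.conv2d(enhanced_spec.unsqueeze(1), kernels)    # (B, n_strf, F', T')
    cln_mod = F.conv2d(clean_spec.unsqueeze(1), kernels)
    err = (enh_mod - cln_mod) ** 2                             # squared modulation error
    if weights is not None:
        err = err * weights.view(1, -1, 1, 1)                  # weight each STRF channel
    return err.mean()

if __name__ == "__main__":
    # Random tensors stand in for spectrograms and STRFs (hypothetical sizes).
    enh = torch.randn(4, 80, 200, requires_grad=True)  # enhanced log-mel spectrograms
    cln = torch.randn(4, 80, 200)                      # clean references
    strfs = torch.randn(32, 9, 9)                      # 32 STRF kernels, 9x9 each
    loss = modulation_domain_mse(enh, cln, strfs)
    loss.backward()                                    # gradients flow to the enhanced signal
    print(loss.item())
```

In practice the loss would be applied to the output of the SE network (and typically combined with a time- or spectrum-domain term), so gradients propagate through the enhanced spectrogram into the network parameters.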