Data-driven speech enhancement (Fingscheidt and Suhadi [1]) aims at improving speech quality for voice calls in a specific noise environment. The essence of the method is a set of frequency-dependent weighting rules, indexed by a priori and a posteriori SNRs, which are learned from clean speech and background noise training data. The weighting rules must be stored separately for each frequency bin and take up about 400 kBytes of memory, which makes DSP implementations relatively expensive.
In this paper we propose an alternative definition of the weighting rules that requires only 27 kBytes of memory, i.e., 6.7% of the original algorithm's memory consumption, with virtually no loss in performance as measured by speech distortion and noise attenuation. Our approach is to redefine the weighting rules on the Bark scale and to store a parametric representation obtained by polynomial curve fitting.
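To illustrate the storage-reduction idea, the following minimal sketch fits a low-order polynomial to a tabulated weighting curve in each Bark band and evaluates the polynomial at runtime instead of keeping a full bin-wise lookup table. It is not the authors' implementation: the number of Bark bands, the polynomial order, the dummy Wiener-like curves, and the reduction to a single SNR variable (the real rules are indexed by both a priori and a posteriori SNRs) are assumptions for illustration only.

```python
import numpy as np

NUM_BARK_BANDS = 18   # assumed number of Bark bands
POLY_ORDER = 4        # assumed polynomial order for the fit

# Placeholder for trained weighting rules: one curve per Bark band,
# tabulated over a grid of a priori SNR values (dB).
snr_grid_db = np.linspace(-15.0, 30.0, 100)
trained_rules = np.array([
    1.0 / (1.0 + 10.0 ** (-(snr_grid_db + b) / 10.0))  # dummy curves, not trained data
    for b in range(NUM_BARK_BANDS)
])

# Store only (POLY_ORDER + 1) coefficients per Bark band instead of a
# frequency-bin-wise lookup table.
coeffs = np.array([
    np.polyfit(snr_grid_db, trained_rules[b], POLY_ORDER)
    for b in range(NUM_BARK_BANDS)
])

def weighting_gain(band: int, snr_db: float) -> float:
    """Evaluate the fitted polynomial for one Bark band at runtime."""
    return float(np.clip(np.polyval(coeffs[band], snr_db), 0.0, 1.0))

print(coeffs.shape)            # (18, 5): a few dozen floats per band
print(weighting_gain(3, 5.0))  # gain for band 3 at 5 dB a priori SNR
```

Under these assumptions the stored table shrinks from one entry per frequency bin and SNR index to a handful of polynomial coefficients per Bark band, which is the same trade-off the proposed method exploits.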