ISCA Archive Interspeech 2015

A two-stage singing voice separation algorithm using spectro-temporal modulation features

Frederick Z. Yen, Mao-Chang Huang, Tai-Shih Chi

A two-stage singing voice separation algorithm using spectro-temporal modulation features is proposed in this paper. First, music clips are transformed into auditory spectrograms, and the spectro-temporal modulation contents of all time-frequency (T-F) units of the auditory spectrograms are extracted using an auditory model. Then, the T-F units are sequentially clustered into percussive, harmonic, and vocal units by the expectation-maximization (EM) algorithm through the proposed two-stage procedure. Lastly, the singing voice is synthesized from the clustered vocal T-F units via time-frequency masking. The algorithm was evaluated on the MIR-1K dataset and demonstrated better separation results than our previously proposed one-stage algorithm.
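
As a concrete illustration of the pipeline, the sketch below implements a minimal, hypothetical version of the two-stage clustering idea in Python. It substitutes an ordinary STFT magnitude spectrogram for the auditory spectrogram, simple per-unit energy and flux features for the spectro-temporal modulation features of the auditory model, and scikit-learn's GaussianMixture for the EM clustering; the cluster-to-source assignments are rough heuristics. It is not the paper's method, only an outline of the two-stage structure (percussive vs. non-percussive, then harmonic vs. vocal, followed by binary masking).

```python
# Hypothetical sketch: two-stage EM clustering of T-F units for voice separation.
# Stand-ins: STFT instead of an auditory spectrogram; log-energy plus temporal and
# spectral flux instead of spectro-temporal modulation features; GaussianMixture
# as the EM clusterer; heuristic rules to name the clusters.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture


def tf_unit_features(mag):
    """Crude per-T-F-unit feature vector: [log energy, temporal flux, spectral flux]."""
    log_mag = np.log1p(mag)
    # Temporal flux: frame-to-frame change (percussive units change abruptly in time).
    tflux = np.abs(np.diff(log_mag, axis=1, prepend=log_mag[:, :1]))
    # Spectral flux: bin-to-bin change along frequency.
    sflux = np.abs(np.diff(log_mag, axis=0, prepend=log_mag[:1, :]))
    feats = np.stack([log_mag, tflux, sflux], axis=-1)  # (freq, time, 3)
    return feats.reshape(-1, 3)


def two_stage_separation(y, sr, n_fft=2048, hop=512):
    stft = librosa.stft(y, n_fft=n_fft, hop_length=hop)
    mag = np.abs(stft)
    feats = tf_unit_features(mag)

    # Stage 1: cluster all T-F units into two groups; call the group with the
    # higher mean temporal flux "percussive" (heuristic assumption).
    gmm1 = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    labels1 = gmm1.fit_predict(feats)
    percussive = labels1 == np.argmax(gmm1.means_[:, 1])

    # Stage 2: re-cluster the remaining units; call the group with the higher
    # mean spectral flux "vocal" (again a heuristic assumption).
    rest = ~percussive
    gmm2 = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    labels2 = gmm2.fit_predict(feats[rest])
    vocal_sub = labels2 == np.argmax(gmm2.means_[:, 2])

    # Binary T-F mask over the vocal units, then resynthesis by masking the mixture STFT.
    vocal_mask = np.zeros(feats.shape[0], dtype=bool)
    vocal_mask[np.flatnonzero(rest)[vocal_sub]] = True
    vocal_mask = vocal_mask.reshape(mag.shape)
    return librosa.istft(stft * vocal_mask, hop_length=hop, length=len(y))


# Usage (file path and sample rate are hypothetical):
# y, sr = librosa.load("mixture.wav", sr=16000, mono=True)
# voice = librosa.util.normalize(two_stage_separation(y, sr))
```

The two GaussianMixture fits stand in for the sequential EM clustering described in the abstract; in the paper, the features driving those fits are spectro-temporal modulation outputs of an auditory model rather than the simple flux features used here.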