ISCA Archive IberSPEECH 2018

Speech and monophonic singing segmentation using pitch parameters

Xabier Sarasola, Eva Navas, David Tavarez, Luis Serrano, Ibon Saratxaga

In this paper we present a novel method for the automatic segmentation of speech and monophonic singing voice based on only two parameters derived from pitch: the proportion of voiced segments and the percentage of pitch frames labelled as a musical note. First, voice is located in the audio files using a GMM-HMM based VAD and the pitch is calculated. Using the pitch curve, musical notes are labelled automatically by searching for sequences of stable pitch values. Then the pitch features extracted from each voice island are classified with Support Vector Machines. Our corpus consists of recordings of live sung poetry sessions in which the audio files contain both singing and speech. The proposed system has been compared with other speech/singing discrimination systems, obtaining good results.
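As a rough illustration of the pipeline described above (not the authors' implementation), the sketch below computes the two pitch-derived features for a single voice island and classifies it with an SVM. The function name, the per-frame note labels, the toy training data and the feature thresholds are all illustrative assumptions.

```python
# Hypothetical sketch: the two pitch-based features from the abstract for one
# voice island, classified with an SVM. Data and names are illustrative only.
import numpy as np
from sklearn.svm import SVC

def island_features(pitch_hz, note_labels):
    """pitch_hz: per-frame pitch in Hz (0 = unvoiced).
    note_labels: per-frame booleans, True where the pitch curve was labelled
    as a stable musical note (assumed to come from a prior labelling step)."""
    voiced = pitch_hz > 0
    prop_voiced = voiced.mean()                                   # proportion of voiced frames
    pct_note = note_labels[voiced].mean() if voiced.any() else 0  # share of voiced frames labelled as a note
    return np.array([prop_voiced, pct_note])

# Toy training data: rows are [prop_voiced, pct_note]; 1 = singing, 0 = speech.
X = np.array([[0.9, 0.8], [0.85, 0.7], [0.6, 0.2], [0.5, 0.1]])
y = np.array([1, 1, 0, 0])
clf = SVC(kernel="rbf").fit(X, y)

# Classify a new voice island given its pitch curve and note labelling.
pitch = np.array([0, 220.0, 221.0, 220.5, 0, 219.8, 220.2, 0])
notes = np.array([False, True, True, True, False, True, True, False])
print(clf.predict([island_features(pitch, notes)]))  # 1 -> singing, 0 -> speech
```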