The analysis of parameters extracted from speech data may contribute, together with other approaches, to the analysis and classification of a subject's emotional status. Pitch value and variability have been shown to carry useful information toward this goal. However, the non-stationarity of running speech and the short duration of utterances make the estimation of these parameters difficult. In this work, a method based on a variation of the Sawtooth Waveform Inspired Pitch Estimator (SWIPE') for estimating pitch and jitter in vowel sounds is evaluated. The performance of the approach is assessed on simulated datasets with varying signal-to-noise ratios and jitter values. Issues related to data length are introduced and discussed through simulations. A comparison of the approach's performance with that of the Simplified Inverse Filtering Technique (SIFT) is presented. Preliminary results on vowels extracted from a database of emotional utterances are reported.
Index Terms: pitch, jitter, SWIPE', emotion, vowels
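As background for the jitter values discussed in the abstract, the following is a minimal sketch of how local jitter can be computed from a sequence of estimated pitch periods. It uses the standard local-jitter definition (mean absolute difference between consecutive periods, normalized by the mean period); it is an illustrative assumption, not the paper's actual SWIPE'-based implementation.

```python
import numpy as np

def local_jitter(periods):
    """Local jitter in percent: mean absolute difference between
    consecutive pitch periods, divided by the mean period."""
    periods = np.asarray(periods, dtype=float)
    diffs = np.abs(np.diff(periods))  # period-to-period perturbations
    return 100.0 * diffs.mean() / periods.mean()

# Example: pitch periods (s) around a 200 Hz vowel (5 ms nominal period)
periods = [0.0050, 0.0051, 0.0049, 0.0050, 0.0052]
print(round(local_jitter(periods), 2))  # prints 2.98
```

In practice the period sequence would come from a pitch tracker such as SWIPE' or SIFT applied to the vowel segment; the accuracy of the jitter estimate then depends directly on the accuracy of the per-cycle pitch estimates, which motivates the SNR and data-length simulations described above.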