In this work we introduce the use of Fujisaki's model of the pitch contour for the task of emotion recognition. To evaluate the proposed features, we employed a decision tree as well as an instance-based learning algorithm. The datasets used for training the classification models were extracted from two emotional speech databases. The results showed that the knowledge extracted from Fujisaki's modeling of intonation benefited all resulting emotion recognition models, yielding an average increase of 9.52% in the overall accuracy across all approaches.
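For reference, a brief sketch of the Fujisaki (command-response) intonation model in its commonly cited form is given below; the notation is the conventional one from the intonation-modeling literature and is supplied here only for illustration, not taken from this work.

\[
\ln F_0(t) = \ln F_b
  + \sum_{i=1}^{I} A_{p_i}\, G_p(t - T_{0i})
  + \sum_{j=1}^{J} A_{a_j}\,\bigl[ G_a(t - T_{1j}) - G_a(t - T_{2j}) \bigr]
\]
\[
G_p(t) =
\begin{cases}
\alpha^{2}\, t\, e^{-\alpha t}, & t \ge 0 \\
0, & t < 0
\end{cases}
\qquad
G_a(t) =
\begin{cases}
\min\bigl[\, 1 - (1 + \beta t)\, e^{-\beta t},\ \gamma \,\bigr], & t \ge 0 \\
0, & t < 0
\end{cases}
\]

Here \(F_b\) is the speaker's baseline frequency, \(A_{p_i}\) and \(A_{a_j}\) are the magnitudes of the phrase and accent commands, and \(T_{0i}\), \(T_{1j}\), \(T_{2j}\) are the command onset and offset times; \(G_p\) and \(G_a\) are the phrase and accent control mechanisms. Features for emotion recognition would typically be derived from the estimated command parameters, though the exact feature set used in this work is defined later in the paper.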