Speech emotion conveys information beyond the textual content of an utterance, but it can also degrade the performance of automatic speech recognition. In a previous study, we evaluated the changes in speech parameters, namely formant frequencies and pitch frequency, caused by anger and grief in the Farsi language. Here, building on those results, we attempt to improve emotional speech recognition accuracy over baseline models. We show that adding parameters such as formant and pitch frequencies to the speech feature vector can improve recognition accuracy; the degree of improvement depends on the parameter type, the number of mixture components, and the emotional condition. Identifying the emotional condition can also help improve speech recognition accuracy. To recognize the emotional condition of speech, formant and pitch frequencies were used successfully in two approaches: maximum likelihood classification and Gaussian mixture models (GMMs).

Keywords: Prosody, Speech emotion, Speech recognition, Emotion recognition.
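The GMM-based emotion recognition described above can be sketched as fitting one mixture model per emotional condition and labeling a test vector with the condition whose model assigns it the highest likelihood. The sketch below is a minimal illustration under assumed synthetic data; the feature values, class names, and model settings are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: classify the emotional condition of a speech frame
# from formant/pitch features by fitting one GMM per emotion and choosing
# the class with the highest log-likelihood. Data here is synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic 3-D feature vectors [F1, F2, pitch] for two emotional conditions.
# The shifts loosely mimic raised pitch and formants under anger; the exact
# numbers are illustrative assumptions, not measurements from the study.
neutral = rng.normal(loc=[500.0, 1500.0, 120.0], scale=[40, 80, 10], size=(200, 3))
anger = rng.normal(loc=[560.0, 1620.0, 180.0], scale=[50, 90, 15], size=(200, 3))

# Fit one GMM per emotional condition (trained by EM, a maximum-likelihood fit).
models = {
    "neutral": GaussianMixture(n_components=2, random_state=0).fit(neutral),
    "anger": GaussianMixture(n_components=2, random_state=0).fit(anger),
}

def classify(features):
    """Return the emotion whose GMM gives the highest log-likelihood."""
    x = np.atleast_2d(features)
    return max(models, key=lambda name: models[name].score(x))

print(classify([505, 1490, 118]))  # a neutral-like frame
print(classify([570, 1640, 185]))  # an anger-like frame
```

The same per-class likelihood comparison also covers the maximum likelihood approach mentioned in the abstract when each class model reduces to a single Gaussian (n_components=1).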