Besides background noise, channel effects, and the speaker's health condition, emotion is another factor that may influence the performance of a speaker verification system. In this paper, the performance of a GMM-UBM based speaker verification system on emotional speech is studied. It is found that speech carrying various emotions degrades verification performance. Two causes of this degradation are analyzed: the emotion mismatch between the speaker models and the test utterances, and the articulation styles of certain emotions, which introduce strong intra-speaker vocal variability. To address the first cause, an emotion-dependent score normalization method is proposed, inspired by the idea of Hnorm.
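As an illustration of such emotion-dependent normalization, the following is a minimal sketch modeled on Hnorm, under assumptions not taken from the paper: per-emotion impostor score statistics (a hypothetical mean mu_e and standard deviation sigma_e for each emotion category) are estimated against the speaker model, and a test score is normalized with the statistics of the emotion of the test utterance. All function names and the NumPy-based interface are illustrative.

```python
import numpy as np

def estimate_emotion_stats(impostor_scores_by_emotion):
    """Estimate per-emotion impostor score statistics (mu_e, sigma_e).

    impostor_scores_by_emotion: dict mapping an emotion label to an
    array of raw scores of impostor utterances (spoken with that
    emotion) scored against the target speaker model.
    Hypothetical helper; the paper's actual procedure may differ.
    """
    return {
        emotion: (np.mean(scores), np.std(scores))
        for emotion, scores in impostor_scores_by_emotion.items()
    }

def emotion_norm(raw_score, emotion, stats):
    """Hnorm-style normalization keyed on the test utterance's emotion."""
    mu_e, sigma_e = stats[emotion]
    return (raw_score - mu_e) / sigma_e

# Example usage with synthetic impostor scores per emotion.
rng = np.random.default_rng(0)
impostor = {
    "neutral": rng.normal(-1.0, 0.5, 200),
    "anger":   rng.normal(-0.4, 0.9, 200),  # emotional speech shifts impostor scores
}
stats = estimate_emotion_stats(impostor)
print(emotion_norm(raw_score=0.8, emotion="anger", stats=stats))
```

The design mirrors Hnorm in replacing the handset label with an emotion label: each emotion gets its own impostor score distribution, so a decision threshold tuned on neutral speech remains meaningful when the test utterance carries a different emotion.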