This paper presents a feature compensation approach for emotion recognition from speech signals. In this approach, the intonation groups (IGs) of the input speech signal are first extracted, and speech features are then extracted from each selected intonation group. Under the assumption of a linear mapping between the feature spaces of different emotional states, a feature compensation approach is proposed to characterize a feature space with better discriminability among emotional states. The compensation vector for each emotional state is estimated using the Minimum Classification Error (MCE) algorithm. For the final emotional-state decision, the IG-based feature vectors, compensated by these compensation vectors, are used to train a Continuous Support Vector Machine (CSVM) for each emotional state, and the emotional state with the maximal output probability is taken as the final output. The kernel function of the CSVM model is experimentally determined to be the radial basis function (RBF), and the experimental results show that IG-based feature extraction and compensation achieve encouraging performance for emotion recognition.
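The decision rule described above (shift each input feature vector by a per-emotion compensation vector, score it against each emotional state with an RBF-kernel model, and pick the state with the highest score) can be illustrated with a minimal sketch. All names, data, and the prototype-similarity scorer below are hypothetical stand-ins: the paper estimates the compensation vectors with MCE training and scores with trained CSVMs, neither of which is shown here.

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    """RBF kernel value between two feature vectors."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def classify(x, prototypes, comp_vectors, gamma=0.5):
    """Return the emotional state whose compensated space best matches x.

    prototypes   : dict mapping emotion -> array of reference IG feature
                   vectors (hypothetical; stands in for a trained CSVM).
    comp_vectors : dict mapping emotion -> compensation vector (fixed by
                   hand here; the paper estimates them via MCE).
    """
    scores = {}
    for emo, refs in prototypes.items():
        # Linear compensation: shift the input toward this emotion's space.
        xc = x + comp_vectors[emo]
        # Stand-in scorer: mean RBF similarity to the emotion's prototypes.
        scores[emo] = np.mean([rbf_kernel(xc, r, gamma) for r in refs])
    # Maximal score plays the role of the maximal CSVM output probability.
    return max(scores, key=scores.get)

# Toy example with hypothetical 2-D IG features and zero compensation:
protos = {"happy": np.array([[1.0, 0.0]]), "sad": np.array([[-1.0, 0.0]])}
comps = {"happy": np.zeros(2), "sad": np.zeros(2)}
pred = classify(np.array([0.9, 0.1]), protos, comps)
```

The toy input lies near the "happy" prototype, so the maximal-score rule selects that state; in the actual system the per-emotion compensation vectors would be learned so that the compensated spaces are maximally discriminable.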