Multimodal emotion recognition (MER), particularly from speech and text, is a promising approach for enhancing human-computer interaction. However, the efficacy of such systems is often compromised by errors introduced during automatic speech recognition (ASR). To address this, we present a comprehensive MER system that explicitly compensates for errors in ASR-generated text. Our system capitalizes on the complementary strengths of speech signals and ASR-generated text, employing a cross-modal transformer (CMT) to fuse the two modalities effectively. We introduce a novel error compensation technique to counteract the detrimental effects of ASR inaccuracies, and we apply preference learning to fine-tune a large language model (LLM), improving its ability to distinguish subtle emotional nuances in text. We evaluate the proposed MER system on the IEMOCAP dataset, demonstrating significant gains in emotion recognition accuracy over conventional methods.
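
The abstract does not give architectural details, so the following is only a rough, hypothetical sketch of how a cross-modal attention block might fuse ASR-text and speech features, assuming PyTorch; the module name, feature dimensions, and class count are illustrative assumptions, not the authors' actual CMT, error-compensation, or preference-learning components.

```python
# Minimal sketch (assumption, not the authors' implementation): cross-modal
# attention in which text features attend to speech features, followed by a
# simple emotion classifier. Dimensions and class count are hypothetical.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=256, num_heads=4, num_classes=4):
        super().__init__()
        # Text features act as queries; speech features act as keys/values.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, text_feats, speech_feats):
        # text_feats:   (batch, text_len, dim)   e.g. token embeddings from text
        # speech_feats: (batch, speech_len, dim) e.g. acoustic frame embeddings
        fused, _ = self.cross_attn(text_feats, speech_feats, speech_feats)
        fused = self.norm(fused + text_feats)   # residual connection
        pooled = fused.mean(dim=1)              # average over text positions
        return self.classifier(pooled)          # emotion logits

# Example usage with random tensors standing in for real features.
model = CrossModalFusion()
text = torch.randn(2, 20, 256)     # 2 utterances, 20 text tokens
speech = torch.randn(2, 100, 256)  # 2 utterances, 100 speech frames
logits = model(text, speech)       # shape: (2, 4)
```

In a system like the one summarized above, such a fusion block would sit between the unimodal encoders and the emotion classifier; the error-compensation mechanism and the preference-learning fine-tuning of the LLM are not sketched here.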