This study investigates the efficacy of using embedding spaces to model phonetic information in emotional utterances for speech emotion recognition. Our approach models phone information implicitly by deriving phone-based embeddings from networks trained specifically for phone recognition and from pre-trained models fine-tuned for phone/character recognition. Evaluations on three speech emotion databases, using both intra-corpus and inter-corpus protocols, demonstrate that implicit modeling of phonetic information performs competitively with knowledge-based handcrafted features.