Understanding soundscapes is essential for making sense of real-world scenarios in complex environments. However, real-life conditions pose significant challenges for AI-based methods, particularly because Audio Tagging problems are often highly imbalanced. In this work, we investigate the impact of data imbalance on the training dynamics of state-of-the-art models for Acoustic Scene Classification. Using the DCASE TAU Urban Acoustic Scenes 2022 dataset and the CP-Mobile model, we introduce controlled imbalance scenarios and analyze their effects through the Entropic Triangle framework. Our findings reveal that training dynamics are strongly influenced by the chosen balancing approach, and suggest that recovering balanced classification metrics with a conventional optimizer requires longer training periods. This study thus provides new insights into the role of entropy-based analysis in developing robust Acoustic Scene Classification systems for real-world applications.