We present an analysis of the classification backends of the ABC submission to the audio tracks of the NIST 2024 Speaker Recognition Evaluation (SRE24). Our analysis covers the embedding pre-processing, classification, score normalization, calibration, and fusion strategies adopted to cope with the source, language, and duration mismatch challenges of SRE24. We show that Pairwise Support Vector Machines provide the best results, which, for single frontends, can be further improved through score-level fusion with additional classifiers. We also show that condition-aware score calibration can mitigate the effects of source mismatch, whereas score normalization methods prove ineffective. Finally, we show that generative calibration achieves results competitive with those of the other approaches.