Most modern speaker verification systems produce uncalibrated scores at their output. That is, while these scores contain valuable information for separating same-speaker from different-speaker trials, they cannot be interpreted in absolute terms, only relative to their distribution. A calibration stage is usually applied to the output of these systems to convert the scores into useful absolute measures that can be interpreted and reliably thresholded to make decisions. In this keynote, we will review the definition of calibration, present ways to measure it, discuss when and why we should care about it, and show different methods that can be used to fix calibration when necessary.
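As an illustrative sketch only (the abstract does not specify a method), a common way to implement such a calibration stage is an affine transformation of the raw score trained with logistic regression on held-out trials, after which the calibrated score behaves like a log-likelihood ratio and can be thresholded with the Bayes decision rule. All data, names, and parameters below are hypothetical.

```python
# Sketch: affine score calibration via logistic regression (assumed method,
# not taken from the abstract), mapping raw scores to LLR-like values.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical held-out calibration trials: raw scores and labels
# (1 = same-speaker, 0 = different-speaker).
rng = np.random.default_rng(0)
raw_scores = np.concatenate([rng.normal(2.0, 1.0, 500),     # same-speaker
                             rng.normal(-1.0, 1.0, 5000)])  # different-speaker
labels = np.concatenate([np.ones(500), np.zeros(5000)])

# Fit an affine map s -> a*s + b with a cross-entropy objective.
clf = LogisticRegression()
clf.fit(raw_scores.reshape(-1, 1), labels)
a, b = clf.coef_[0, 0], clf.intercept_[0]

# The logistic-regression logit equals the LLR plus the prior log-odds of
# the calibration set; subtract the prior log-odds to recover the LLR.
prior_log_odds = np.log(labels.mean() / (1 - labels.mean()))

def calibrate(s):
    return a * s + b - prior_log_odds

# With an LLR in hand, the decision threshold follows directly from the
# application's target prior and miss / false-alarm costs (Bayes rule).
p_target, c_miss, c_fa = 0.01, 1.0, 1.0
threshold = np.log((c_fa * (1 - p_target)) / (c_miss * p_target))
decisions = calibrate(raw_scores) > threshold
```

The design choice worth noting is that the affine parameters are trained on data held out from the system's own training, so the calibrated scores reflect the conditions in which the system will be deployed; the same threshold formula can then be reused across applications by changing only the prior and costs.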