Training large-scale speaker verification systems on consumer GPUs is difficult because the memory consumption of existing networks grows linearly with the number of layers. In this paper, we propose a novel family of Reversible Neural Networks (RevNets) for memory-efficient speaker verification. Specifically, we introduce two types of RevNets, partially and fully reversible networks, which reduce or eliminate the need to store activations in memory during back-propagation, since activations can be reconstructed from layer outputs in the backward pass. Consequently, RevNets incur a nearly constant memory cost as network depth increases. Experiments on VoxCeleb show that RevNets achieve up to 15.7x memory savings while maintaining nearly identical parameter counts and performance compared to vanilla ResNets. To our knowledge, this is the first work to investigate memory-efficient training for speaker verification. Our results indicate the potential of reversible networks as a more efficient backbone for resource-limited training scenarios.
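As a rough illustration of the underlying idea (not the exact blocks used in this paper), a reversible block splits its input into two halves and computes y1 = x1 + F(x2), y2 = x2 + G(y1); because this mapping is exactly invertible, the backward pass can reconstruct the inputs from the outputs instead of caching them. A minimal PyTorch sketch, where the sub-networks F and G are hypothetical placeholders:

```python
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """A minimal reversible block: inputs can be reconstructed exactly
    from the outputs, so intermediate activations need not be stored."""

    def __init__(self, f: nn.Module, g: nn.Module):
        super().__init__()
        self.f = f  # arbitrary sub-network F (placeholder)
        self.g = g  # arbitrary sub-network G (placeholder)

    def forward(self, x1, x2):
        # Forward coupling: y1 = x1 + F(x2), y2 = x2 + G(y1)
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        # Exact inversion: recompute the inputs from the outputs.
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2

# Sanity check: the inverse reconstructs the inputs (up to float error).
f, g = nn.Linear(64, 64), nn.Linear(64, 64)
block = ReversibleBlock(f, g)
x1, x2 = torch.randn(8, 64), torch.randn(8, 64)
with torch.no_grad():
    y1, y2 = block(x1, x2)
    r1, r2 = block.inverse(y1, y2)
print(torch.allclose(r1, x1, atol=1e-5), torch.allclose(r2, x2, atol=1e-5))
```

Because each layer's inputs are recovered on the fly during back-propagation, only the final outputs must be kept in memory, which is why the memory cost stays nearly constant with depth.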