How to develop lightweight systems customized for mobile devices is an urgent and intriguing topic in speaker verification. In this paper, we investigate extremely low-bit quantization for small-footprint speaker verification. Specifically, we propose two binary quantization schemes, a static and an adaptive quantizer. Applying them to a pre-trained full-precision ResNet yields binarized variants, named b-vector, with a model size under 1 MB. Experiments on the VoxCeleb dataset show that, compared with the previous best small-footprint system, our best b-vector system achieves 38%, 36%, and 30% relative improvements on Vox1-O, Vox1-E, and Vox1-H respectively, while maintaining an almost identical model size. In addition, an analysis of the binarized weight histograms reveals that the adaptive quantization scheme matches the real-valued weight distribution better than the static one, and hence offers stronger representation ability.
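To make the distinction between the two schemes concrete, the following is a minimal, illustrative sketch of weight binarization. It does not reproduce the paper's exact formulation: the function names and the choice of a per-tensor mean-absolute-value scale (in the style of XNOR-Net) are assumptions for the purpose of illustration. It shows why an adaptive scale can track the real-valued weight distribution more closely than a fixed ±1 codebook.

```python
import numpy as np

def static_binarize(w):
    # Static scheme (illustrative): map each weight to a fixed +/-1 codebook.
    return np.where(w >= 0, 1.0, -1.0)

def adaptive_binarize(w):
    # Adaptive scheme (illustrative): rescale the +/-1 codebook by the
    # mean absolute value of the tensor, so the binarized weights better
    # match the magnitude of the real-valued distribution.
    alpha = np.abs(w).mean()
    return alpha * np.where(w >= 0, 1.0, -1.0)

# Typical neural-network weights are small and roughly zero-centered.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(256, 256)).astype(np.float32)

err_static = np.mean((w - static_binarize(w)) ** 2)
err_adaptive = np.mean((w - adaptive_binarize(w)) ** 2)
assert err_adaptive < err_static  # the adaptive scale gives lower quantization error
```

Under this sketch, the static quantizer incurs a large reconstruction error because its ±1 levels are far from the small-magnitude weights, while the adaptive scale shrinks the codebook to fit them, which is consistent with the histogram analysis described in the abstract.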