In this paper, we study visually grounded speech (VGS) models in a few-shot setting. Starting from a model pre-trained to associate natural images with speech waveforms describing them, we probe the model's ability to learn to recognize novel words and their visual referents from a limited number of additional examples. We define new splits of the SpokenCOCO dataset to facilitate few-shot word and object acquisition, explore various few-shot fine-tuning strategies aimed at mitigating catastrophic forgetting, and identify several techniques that work well in this respect.