Smart voice assistants that rely on automatic speech recognition (ASR) are in widespread everyday use. These devices, however, feature "always on" microphones that allow sensitive and private user information to be collected, whether maliciously or inadvertently. In this paper, we develop an end-to-end approach that generates utterance-specific perturbations to obscure a set of words deemed sensitive. In particular, we choose spoken digits, which may appear in credit card or social security numbers, as the words an ASR system should not be able to recognize, while all other words should still be recognized correctly. Our approach combines a self-supervised learning (SSL) feature extractor with a U-Net-style network that generates the noise perturbations. The proposed approach shows promising performance, helping to address privacy concerns without affecting the main functionality of the ASR model.
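To make the pipeline concrete, the following is a minimal sketch of the overall data flow only, not the paper's implementation: an SSL feature extractor is replaced by a trivial framing-plus-log-energy stand-in, and the U-Net generator is replaced by a random-noise stand-in scaled to a signal-to-noise budget. All function names, the 16 kHz sampling rate, and the 20 dB budget are illustrative assumptions.

```python
import numpy as np

def extract_features(waveform, frame_len=400, hop=160):
    # Stand-in for an SSL feature extractor (e.g. a wav2vec-style model):
    # frame the signal and take log-energy per frame.
    n_frames = 1 + (len(waveform) - frame_len) // hop
    frames = np.stack([waveform[i * hop : i * hop + frame_len]
                       for i in range(n_frames)])
    return np.log(np.mean(frames ** 2, axis=1) + 1e-8)

def generate_perturbation(waveform, features, snr_db=20.0, seed=0):
    # Stand-in for the U-Net generator: produce an utterance-length noise
    # signal, then scale it to a target signal-to-noise ratio so the
    # perturbation stays low-energy relative to the speech.
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(waveform))
    sig_pow = np.mean(waveform ** 2)
    noise_pow = np.mean(noise ** 2)
    scale = np.sqrt(sig_pow / (noise_pow * 10 ** (snr_db / 10)))
    return noise * scale

# Toy one-second "utterance" at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
utterance = 0.1 * np.sin(2 * np.pi * 220 * t)

feats = extract_features(utterance)
delta = generate_perturbation(utterance, feats, snr_db=20.0)
protected = utterance + delta  # waveform fed to the ASR system

# The perturbation respects the 20 dB SNR budget exactly by construction.
snr = 10 * np.log10(np.mean(utterance ** 2) / np.mean(delta ** 2))
print(round(snr, 1))  # → 20.0
```

In the actual system the generator would be trained end-to-end so that an ASR model misrecognizes the targeted digit words in `protected` while transcribing the remaining words correctly; the sketch above only illustrates the additive-perturbation structure and the energy constraint.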