Most audio datasets used to train audio generation models are of low quality, which makes it difficult to generate high-quality, single-event audio. However, acquiring noise-free, single-event audio is costly. In this paper, we propose a simple retrieval-augmented classifier-guided sampling strategy for foley sound synthesis. Specifically, to guide the diffusion model during sampling with classifier guidance, we first retrieve audio features relevant to the input class using a Contrastive Language-Audio Pretraining (CLAP) model. Gradients from a classifier applied to the retrieved audio features then serve as additional guidance. Our evaluation on the DCASE 2023 Challenge Task 7 dataset demonstrates that the proposed method improves the overall Fréchet audio distance score.
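
Below is a minimal sketch, not the authors' implementation, of what a single classifier-guided reverse-diffusion step conditioned on retrieved features might look like. The module names (ToyDenoiser, ToyClassifier), dimensions, the simplified update rule, and the randomly generated retrieved_emb placeholder (standing in for CLAP audio embeddings returned by retrieval) are all illustrative assumptions.

```python
import torch
import torch.nn as nn

LATENT_DIM, EMB_DIM, NUM_CLASSES = 64, 512, 7  # illustrative sizes only


class ToyDenoiser(nn.Module):
    """Stand-in for the diffusion model's noise predictor eps_theta(x_t, t)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM + 1, 128), nn.SiLU(),
                                 nn.Linear(128, LATENT_DIM))

    def forward(self, x_t, t):
        t_feat = t.float().unsqueeze(-1)            # append timestep as a scalar feature
        return self.net(torch.cat([x_t, t_feat], dim=-1))


class ToyClassifier(nn.Module):
    """Classifier whose gradients steer sampling toward the target class,
    conditioned on retrieved audio embeddings."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(LATENT_DIM + EMB_DIM, NUM_CLASSES)

    def forward(self, x_t, retrieved_emb):
        return self.net(torch.cat([x_t, retrieved_emb], dim=-1))


def guided_step(x_t, t, target_class, denoiser, classifier, retrieved_emb,
                guidance_scale=2.0, step_size=0.01):
    """One heavily simplified reverse-diffusion update with classifier guidance."""
    # Gradient of the target-class log-probability w.r.t. the noisy sample,
    # computed with the retrieved features as additional conditioning.
    x_in = x_t.detach().requires_grad_(True)
    log_probs = torch.log_softmax(classifier(x_in, retrieved_emb), dim=-1)
    selected = log_probs[torch.arange(x_in.size(0)), target_class].sum()
    grad = torch.autograd.grad(selected, x_in)[0]

    # Nudge the denoiser's update in the direction suggested by the classifier.
    with torch.no_grad():
        eps = denoiser(x_t, t)
        x_prev = x_t - step_size * eps + guidance_scale * step_size * grad
    return x_prev


if __name__ == "__main__":
    denoiser, classifier = ToyDenoiser(), ToyClassifier()
    x_t = torch.randn(4, LATENT_DIM)        # noisy latents at timestep t
    retrieved = torch.randn(4, EMB_DIM)     # placeholder for retrieved CLAP features
    t = torch.full((4,), 50)
    target = torch.tensor([3, 3, 3, 3])     # desired sound-event class
    x_prev = guided_step(x_t, t, target, denoiser, classifier, retrieved)
    print(x_prev.shape)                     # torch.Size([4, 64])
```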