Generative models have shown strong performance in speech enhancement, and consistency models further improve both inference speed and quality. Building on these advances, we propose an SNR adaptation framework that dynamically aligns the diffusion timestep with the signal-to-noise ratio (SNR) of the input signal, improving robustness across diverse noise conditions. In our framework, the reverse process is conditioned on a diffusion timestep adjusted according to the estimated SNR, while the additive Gaussian noise is modulated by the same SNR estimate. This design yields a continuous SNR-conditioning mechanism in which the diffusion timestep serves as an SNR control parameter, allowing the model to adapt its enhancement process to the input SNR. Experimental results demonstrate that the proposed framework consistently improves perceptual quality, with even greater gains under challenging SNR conditions, highlighting its effectiveness.
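To make the conditioning mechanism concrete, the sketch below illustrates one way the described SNR-to-timestep mapping could work: an estimated input SNR is mapped monotonically (lower SNR, larger timestep) to a diffusion timestep, and the Gaussian noise added at the start of the reverse process is scaled accordingly. The SNR range, timestep range, linear mapping, and the noise schedule `sigma(t) = t` are all illustrative assumptions, not the paper's exact design.

```python
import numpy as np

# Hypothetical ranges; the abstract does not specify these values.
SNR_MIN_DB, SNR_MAX_DB = -5.0, 20.0   # assumed span of input SNRs
T_MIN, T_MAX = 0.05, 1.0              # assumed diffusion timestep range

def snr_to_timestep(snr_db):
    """Map an estimated input SNR (dB) to a diffusion timestep.

    Lower SNR means more denoising work, so the mapping is monotonically
    decreasing in SNR. A linear map is an illustrative choice only.
    """
    s = np.clip(snr_db, SNR_MIN_DB, SNR_MAX_DB)
    frac = (SNR_MAX_DB - s) / (SNR_MAX_DB - SNR_MIN_DB)  # 0 at high SNR, 1 at low SNR
    return T_MIN + frac * (T_MAX - T_MIN)

def snr_adapted_start(noisy, snr_db, rng=None):
    """Form the reverse-process starting point: the noisy input plus
    Gaussian noise whose scale follows the SNR-adapted timestep.
    Here sigma(t) = t, a common variance-exploding convention (assumed)."""
    rng = rng or np.random.default_rng(0)
    t = snr_to_timestep(snr_db)
    sigma = t  # assumed noise schedule sigma(t) = t
    return noisy + sigma * rng.standard_normal(noisy.shape), t

# Example: a low-SNR input receives a larger timestep (more injected noise)
# than a high-SNR input.
x = np.zeros(16000)                       # placeholder 1-second waveform
x_t_low, t_low = snr_adapted_start(x, snr_db=-5.0)
x_t_high, t_high = snr_adapted_start(x, snr_db=20.0)
```

The reverse process would then be conditioned on `t` so that the timestep acts as a continuous SNR control parameter, as described above.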