Retrieval-based dialogue response selection aims to find a proper response for a multi-turn context from a candidate set. The sequence representations generated by pre-trained language models (PLMs) play a key role in learning the degree of matching between dialogue contexts and responses. However, context-response pairs that share the same context but have different responses tend to receive highly similar sequence representations from PLMs, which makes it hard to distinguish positive responses from negative ones. Motivated by this, we propose a novel Fine-Grained Contrastive (FGC) learning method for the response selection task based on PLMs. This FGC learning strategy helps PLMs generate more distinguishable pair representations for each dialogue at a fine-grained level, and thus make better predictions when selecting positive responses. Empirical studies on two benchmark datasets demonstrate that the proposed FGC learning method generally and significantly improves the performance of existing PLM-based matching models.
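To illustrate the general idea (this is a generic InfoNCE-style contrastive objective, not the paper's exact FGC loss), a contrastive loss can push the representation of the matched context-response pair toward an anchor while pushing mismatched pairs, which share the same context but a negative response, away from it. A minimal NumPy sketch, where `anchor`, `positive`, and `negatives` are hypothetical pair representations:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def infonce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative, not the paper's FGC loss).

    anchor:    (d,) anchor representation
    positive:  (d,) representation of the matched context-response pair
    negatives: (k, d) representations of mismatched pairs (same context,
               negative responses)
    """
    # Temperature-scaled similarities; the positive sits at index 0.
    sims = np.array([cosine(anchor, positive)] +
                    [cosine(anchor, n) for n in negatives]) / temperature
    sims -= sims.max()  # numerical stability before softmax
    log_probs = sims - np.log(np.exp(sims).sum())
    # Cross-entropy with the positive as the target class.
    return -log_probs[0]

# Toy example: the positive pair representation lies close to the anchor,
# so the loss should be small relative to a randomly placed "positive".
rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.1 * rng.normal(size=8)
negatives = rng.normal(size=(4, 8))
print(infonce_loss(anchor, positive, negatives))
```

Minimizing this loss increases the similarity gap between the positive pair and the negatives, which is the distinguishability the abstract describes, though FGC applies the idea at a finer granularity than this whole-sequence sketch.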