Automated audio captioning (AAC) is a crucial task in machine perception within the audio domain. However, existing AAC studies often fail to capture the temporal relationships among sound events, leading to incorrect captions. Some recent studies leverage sound event detection models to extract temporal information, but they remain limited by their dependence on separately pre-trained models. In this study, we propose Temp4Cap, a novel AAC framework that directly learns temporal alignment via contrastive learning, using “temporal captions” generated by a large language model. To capture temporal relationships, we apply a temporal negative sampling strategy that generates negative samples through event- and order-level shuffling and random substitution during contrastive learning. Experimental results on Clotho and AudioCaps show that Temp4Cap significantly improves both captioning and temporal metrics.
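
To make the negative sampling strategy concrete, the following is a minimal illustrative sketch, not the paper's actual implementation: it assumes a temporal caption can be represented as an ordered list of sound-event phrases, and all function names, the distractor pool, and the “then” connective are hypothetical choices for demonstration.

```python
import random

def event_shuffle(events):
    """Event-level shuffle: permute all events so the global order is wrong."""
    neg = events[:]
    while neg == events and len(neg) > 1:
        random.shuffle(neg)
    return neg

def order_swap(events):
    """Order-level perturbation: swap one adjacent pair of events."""
    neg = events[:]
    i = random.randrange(len(neg) - 1)
    neg[i], neg[i + 1] = neg[i + 1], neg[i]
    return neg

def random_substitution(events, distractors):
    """Replace one event with a randomly drawn distractor event."""
    neg = events[:]
    neg[random.randrange(len(neg))] = random.choice(distractors)
    return neg

def to_caption(events):
    """Join ordered events with a temporal connective to form a caption."""
    return ", then ".join(events)

if __name__ == "__main__":
    # Hypothetical positive temporal caption and distractor pool.
    events = ["a dog barks", "a car passes by", "birds chirp"]
    distractors = ["rain falls", "a door slams"]
    print("positive:    ", to_caption(events))
    print("event shuffle:", to_caption(event_shuffle(events)))
    print("order swap:  ", to_caption(order_swap(events)))
    print("substitution:", to_caption(random_substitution(events, distractors)))
```

In contrastive training, captions produced this way would serve as hard negatives paired against the audio and its positive temporal caption, encouraging the model to distinguish correct from incorrect event orderings.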