Deep learning-based automated audio captioning (AAC) systems describe audio content well, yet they often overfit to the style of reference captions. To address this, reinforcement learning (RL) techniques have been adopted to directly optimize evaluation metrics, but these methods often suffer from word repetition and contextual distortion. Embedding-based rewards, such as those derived from contrastive language-audio pretraining (CLAP), may bias the model toward specific words or phrases that human evaluators find unnatural. In this paper, we propose a novel reward system that combines a CLAP-based reward with a repetition penalty (CRRP) and a large language model (LLM) evaluator. CRRP computes rewards from CLAP similarity, applies a repetition penalty and reward clipping to stabilize training, and uses LLM feedback to enhance naturalness. Our method achieves strong performance in semantic evaluations and in both human and AI-based assessments.
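The reward shaping described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual formulation: the function name `crrp_reward`, the n-gram-based repetition measure, the penalty weight, and the clip range are all assumptions for exposition; the CLAP audio-text similarity score is assumed to be precomputed elsewhere.

```python
def crrp_reward(caption_tokens, clap_score, n=2,
                penalty_weight=0.5, clip_range=(0.0, 1.0)):
    """Illustrative CRRP-style reward: CLAP similarity minus a
    repetition penalty, clipped to a fixed range.

    caption_tokens: tokenized candidate caption.
    clap_score: assumed precomputed CLAP audio-text similarity in [0, 1].
    """
    # Repetition penalty: fraction of n-grams in the caption that repeat.
    ngrams = [tuple(caption_tokens[i:i + n])
              for i in range(len(caption_tokens) - n + 1)]
    rep_ratio = 1.0 - len(set(ngrams)) / max(len(ngrams), 1)

    # Penalized reward, then clipping to stabilize RL training.
    reward = clap_score - penalty_weight * rep_ratio
    lo, hi = clip_range
    return min(max(reward, lo), hi)
```

For example, a caption that repeats a phrase (`"a dog barks a dog barks"`) receives a lower reward than a non-repetitive one with the same CLAP score, while clipping keeps the signal bounded for stable policy updates.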