We introduce a framework for LLM-based human-in-the-loop ASR that enhances the quality of ASR transcripts, with a particular focus on accurately capturing named entities. A key contribution of this work is demonstrating that providing LLMs with even a small set of high-quality, human-annotated transcript examples can significantly reduce WER and improve entity recall. Our framework cuts the costly human-annotation requirement to just 5% of each call, and it outperforms all baselines, including out-of-the-box Whisper and Whisper paired with a zero-shot GPT corrector. We further derive insights into how a chain-of-thought framework can effectively combine LLM prompts with human input to improve speech data annotation quality. With minimal human effort, our framework achieves a relative improvement of 10% or more in both WER and entity F1 score over the baseline.
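To make the correction step concrete, the sketch below illustrates how a handful of human-annotated (raw ASR, corrected) transcript pairs can be supplied as few-shot examples to an LLM-based corrector. This is a minimal illustration assuming the OpenAI Python client; the model choice, prompt wording, and example pairs are hypothetical placeholders, not the exact prompts or annotation-selection procedure used in our framework.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical human-annotated (raw ASR -> corrected) example pairs,
# standing in for the ~5% of each call that receives human annotation.
FEW_SHOT_EXAMPLES = [
    ("thanks for calling acme fine ann shall services",
     "Thanks for calling Acme Financial Services."),
    ("my name is jon smyth from dubuque iowa",
     "My name is John Smith from Dubuque, Iowa."),
]

def correct_transcript(asr_hypothesis: str) -> str:
    """Correct an ASR hypothesis, guided by human-annotated examples."""
    messages = [{
        "role": "system",
        "content": (
            "You correct ASR transcripts. Pay special attention to named "
            "entities such as people, organizations, and locations. "
            "Reason step by step about likely mis-transcriptions, then "
            "output only the corrected transcript."
        ),
    }]
    # Few-shot pairs teach the model the call's entities and style.
    for raw, corrected in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": raw})
        messages.append({"role": "assistant", "content": corrected})
    messages.append({"role": "user", "content": asr_hypothesis})

    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(correct_transcript("please transfer me to jane dough in a counting"))
```

In this sketch, the annotated pairs play the role of in-context examples: they expose the domain's named entities and transcription conventions to the model, which is what distinguishes this setup from the zero-shot GPT corrector baseline.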