In Automatic Speech Recognition (ASR), language model re-ranking based on unlabeled text can improve performance and enable flexible scenario adaptation. A typical ASR re-ranking scheme builds a language model and then uses it to reorder the N-best hypotheses produced by the recognizer. Recently, BERT-based re-ranking has achieved impressive results, benefiting from BERT's powerful contextual semantic modeling. However, because BERT's non-autoregressive structure limits the speed at which language model scores (perplexity, PPL) can be computed, we replace PPL-based re-ranking with a classification method in the prompt paradigm. The prompt-based re-ranking scheme simplifies the re-ranking pipeline while preserving performance. Experiments on the AISHELL-1 dataset demonstrate the effectiveness of the proposed method: on the test set, inference is accelerated by a factor of 49, and the Character Error Rate (CER) is relatively reduced by 13.51%~14.43% compared to the baseline.
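The standard N-best re-ranking pipeline mentioned above can be sketched as follows. This is a minimal illustration only: a toy unigram language model stands in for the BERT-based scorer, and all vocabulary entries, scores, and the `lm_weight` parameter are hypothetical, not taken from the paper.

```python
# Toy sketch of LM-based N-best re-ranking: combine each hypothesis's ASR
# score with a language-model score and pick the best total. The unigram
# log-probabilities below are illustrative placeholders.
UNIGRAM_LOGPROB = {"the": -1.0, "cat": -3.0, "sat": -3.5, "mat": -4.0,
                   "cap": -6.0, "sad": -6.5, "on": -1.5}

def lm_logprob(sentence):
    # Sum per-token log-probabilities; unknown tokens get a floor value.
    return sum(UNIGRAM_LOGPROB.get(tok, -8.0) for tok in sentence.split())

def rerank(nbest, lm_weight=0.5):
    # nbest: list of (hypothesis, asr_score); higher scores are better.
    scored = [(hyp, asr + lm_weight * lm_logprob(hyp)) for hyp, asr in nbest]
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Three N-best hypotheses: the top ASR hypothesis contains a recognition
# error ("cap"), which the LM score corrects after re-ranking.
nbest = [("the cap sat on the mat", -10.0),
         ("the cat sat on the mat", -10.5),
         ("the cat sad on the mat", -10.2)]
best_hyp, best_score = rerank(nbest)[0]
# best_hyp == "the cat sat on the mat"
```

With a BERT-style scorer, `lm_logprob` would instead require one masked forward pass per token (pseudo-perplexity), which is the speed bottleneck the prompt-based classification approach is designed to avoid.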