Speech classification systems based on deep learning are vulnerable to backdoor attacks, which cause a model's predictions to deviate from normal behavior on triggered inputs. Existing speech backdoor methods often produce poisoned samples through perceptible modifications, reducing the stealthiness of the attack and making it easier to detect. To improve stealthiness, this paper proposes the Latent Rearrangement Backdoor Attack (LRBA), a novel backdoor attack framework that exploits the latent space of a pre-trained VITS model to achieve an imperceptible attack. Specifically, we manipulate latent representations through the normalizing flow of VITS to generate rearranged utterances, whose rearranged semantics are associated with the attacker's chosen target label, thereby implanting the backdoor. Results show that our method achieves a high attack success rate at a very low poisoning rate while maintaining a high mean opinion score, outperforming existing methods in both effectiveness and stealthiness.
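To make the core operation concrete, the following is a minimal, self-contained sketch of the latent-rearrangement idea. It substitutes a toy affine coupling layer for VITS's actual normalizing flow, and all module names, shapes, and variable names here are illustrative assumptions rather than the paper's implementation: a latent is mapped into the flow's prior space, its time frames are permuted there, and the flow is inverted to obtain a poisoned latent.

```python
import torch
import torch.nn as nn


class AffineCoupling(nn.Module):
    """Toy invertible affine coupling layer, standing in for VITS's flow.

    The first half of the channels conditions an affine transform of the
    second half, so the mapping can be inverted exactly.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Predicts per-frame log-scale and shift from the untouched half.
        self.net = nn.Sequential(
            nn.Conv1d(channels // 2, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        xa, xb = x.chunk(2, dim=1)
        log_s, t = self.net(xa).chunk(2, dim=1)
        return torch.cat([xa, xb * log_s.exp() + t], dim=1)

    def inverse(self, z: torch.Tensor) -> torch.Tensor:
        za, zb = z.chunk(2, dim=1)
        log_s, t = self.net(za).chunk(2, dim=1)
        return torch.cat([za, (zb - t) * (-log_s).exp()], dim=1)


flow = AffineCoupling(channels=8)

# Stand-in for a posterior latent [batch, channels, frames]; in LRBA this
# would come from the pre-trained VITS posterior encoder (assumption).
z = torch.randn(1, 8, 50)

u = flow(z)                              # map latent into the flow's prior space
perm = torch.randperm(u.size(-1))        # rearrange time frames in that space
z_poisoned = flow.inverse(u[..., perm])  # invert back to a valid latent

# z_poisoned would then be decoded into a rearranged utterance and labeled
# with the attacker's target class to form a poisoned training sample.
print(z_poisoned.shape)  # torch.Size([1, 8, 50])
```

Operating in the flow's prior space (rather than permuting the raw latent directly) keeps the rearranged representation on the model's learned manifold, which is what allows the decoded utterance to remain natural-sounding and imperceptible as a trigger.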