SpeechFlow is a powerful speech factorization model based on the information bottleneck (IB) principle, and its effectiveness has been reported in several studies. A potential problem with SpeechFlow, however, is that if the IB channels are not well designed, the resulting factors cannot be well disentangled. In this study, we propose a CycleFlow model that combines random factor substitution with a cycle-consistency loss to solve this problem. Theoretical analysis shows that this approach enforces independent information codes without sacrificing reconstruction quality. Experiments on voice conversion tasks demonstrate that this simple technique effectively reduces the mutual information between codes and produces clearly better conversion than the vanilla SpeechFlow.
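
To make the core idea concrete, below is a minimal sketch (not the authors' code) of how random factor substitution and a cycle-consistency loss could be combined on top of a SpeechFlow-style factor encoder and decoder. The module names (`FactorEncoder`, `Decoder`), feature dimensions, and the weight `lambda_cyc` are hypothetical placeholders, not details from the paper.

```python
# Minimal sketch of a CycleFlow-style training step, assuming a
# SpeechFlow-like setup with one IB code per factor. All names and
# dimensions here are illustrative, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FactorEncoder(nn.Module):
    """Toy stand-in for the SpeechFlow IB encoders: one code per factor."""
    def __init__(self, in_dim=80, code_dim=8, n_factors=3):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(in_dim, code_dim) for _ in range(n_factors))

    def forward(self, x):
        # x: (batch, in_dim) pooled spectrogram features (simplified)
        return [torch.tanh(h(x)) for h in self.heads]


class Decoder(nn.Module):
    """Toy decoder that reconstructs features from the concatenated codes."""
    def __init__(self, out_dim=80, code_dim=8, n_factors=3):
        super().__init__()
        self.net = nn.Linear(code_dim * n_factors, out_dim)

    def forward(self, codes):
        return self.net(torch.cat(codes, dim=-1))


def cycleflow_step(enc, dec, x_a, x_b, lambda_cyc=1.0):
    """One training step: reconstruction loss plus cycle consistency after
    substituting one randomly chosen factor code from another utterance."""
    codes_a = enc(x_a)
    codes_b = enc(x_b)

    # Plain SpeechFlow-style reconstruction loss on utterance A.
    recon = F.l1_loss(dec(codes_a), x_a)

    # Random factor substitution: replace one of A's codes with B's.
    k = torch.randint(len(codes_a), (1,)).item()
    mixed = [codes_b[i] if i == k else codes_a[i] for i in range(len(codes_a))]
    x_mix = dec(mixed)

    # Cycle consistency: re-encoding the converted speech should recover
    # the same codes that were used to generate it.
    codes_cyc = enc(x_mix)
    cyc = sum(F.l1_loss(c_hat, c.detach()) for c_hat, c in zip(codes_cyc, mixed))

    return recon + lambda_cyc * cyc


if __name__ == "__main__":
    enc, dec = FactorEncoder(), Decoder()
    x_a, x_b = torch.randn(4, 80), torch.randn(4, 80)
    loss = cycleflow_step(enc, dec, x_a, x_b)
    loss.backward()
    print(float(loss))
```

The intuition the sketch tries to capture: if the codes leak information across channels, a decoder output built from a substituted code will not re-encode to the same codes, so the cycle-consistency term penalizes entanglement while the reconstruction term is left intact.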