Flow-based neural vocoders have demonstrated their effectiveness in generating high-fidelity speech in real time. However, most flow-based vocoders are computationally heavy models that rely on large amounts of speech data for training. Motivated by these limitations, a new flow-based vocoder, namely Semi-inverse Dynamic WaveFlow (SiD-WaveFlow), is proposed for low-resource speech synthesis. SiD-WaveFlow can generate high-quality speech in real time under the constraint of limited training data. Specifically, SiD-WaveFlow introduces a module named Semi-inverse Dynamic Transformation (SiDT), which improves both synthesis quality and computational efficiency by replacing the affine coupling layers (ACL) used in WaveGlow. In addition, a pre-emphasis operation is introduced into the training process of SiD-WaveFlow to further improve the quality of the synthesized speech. Experimental results corroborate that SiD-WaveFlow generates speech of better quality than its counterparts. In particular, the TTS system integrating the SiD-WaveFlow vocoder achieves MOS values of 3.416 and 2.968 on the CSMSC and LJ Speech datasets, respectively. Moreover, SiD-WaveFlow converges much faster than WaveGlow during training. Finally, SiD-WaveFlow is a lightweight model that can synthesize speech on edge devices with much faster inference. The source code and demos are available at https://slptongji.github.io/.
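
For context, the pre-emphasis mentioned above is conventionally a first-order high-pass filter applied to the training waveform, with a matching de-emphasis filter applied to synthesized audio. The sketch below illustrates only this standard operation, not the paper's exact configuration; the coefficient 0.97 and the function names `preemphasize`/`deemphasize` are our assumptions.

```python
import numpy as np
from scipy.signal import lfilter

def preemphasize(wav: np.ndarray, alpha: float = 0.97) -> np.ndarray:
    # First-order high-pass filter on the training audio:
    # y[n] = x[n] - alpha * x[n-1].
    # alpha = 0.97 is a common default, assumed here rather than
    # taken from the paper.
    return lfilter([1.0, -alpha], [1.0], wav)

def deemphasize(wav: np.ndarray, alpha: float = 0.97) -> np.ndarray:
    # Inverse filter, undoing the high-frequency boost on the
    # vocoder's output: y[n] = x[n] + alpha * y[n-1].
    return lfilter([1.0], [1.0, -alpha], wav)
```

In a typical setup of this kind, `preemphasize` is applied to target waveforms before training, and `deemphasize` is applied to the vocoder's output at inference time so the final audio has a flat spectral balance again.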