The rapid development of neural text-to-speech (TTS) systems has enabled their use in other areas of natural language processing, such as automatic speech recognition (ASR) and spoken language translation (SLT). Given the large number of available TTS architectures and corresponding extensions, selecting a TTS system for synthetic data creation is not an easy task. We compare five different TTS decoder architectures in the scope of synthetic data generation and show their impact on CTC-based speech recognition training. We relate the recognition results to computable metrics such as NISQA MOS and intelligibility, and find no clear correspondence to ASR performance. We further observe that, for data generation, auto-regressive decoding performs better than non-autoregressive decoding, and we propose an approach to compare the generalization capabilities of different TTS systems.