We propose the Speech-to-Electromyography Generative Adversarial Network (STE-GAN), a model that synthesizes electromyography (EMG) signals from acoustic speech. The generator network is conditioned on representations of the spoken content obtained from a voice conversion model. Given these representations, the generator outputs an EMG signal corresponding to the articulated content of the acoustic speech as it would be recorded in a specific EMG recording session. In contrast to previous work, STE-GAN generates EMG signals directly from acoustic speech. Because it uses more speaker-independent content representations as input, it can synthesize EMG signals from the speech of speakers who were unseen during training.
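To make the conditioning concrete, the sketch below shows the generator's input/output interface: frame-level content representations plus a per-session embedding in, a multi-channel EMG signal out. All names, shapes, and the random projection standing in for learned weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(content, session_embedding, n_emg_channels=8, upsample=4):
    """Hypothetical STE-GAN-style generator interface.

    content:           (T, D) frame-level content representations,
                       e.g. features from a voice conversion model.
    session_embedding: (S,) embedding identifying one EMG recording session.
    Returns an EMG signal of shape (T * upsample, n_emg_channels).
    """
    T, D = content.shape
    S = session_embedding.shape[0]
    # Stand-in for a learned network: a fixed random linear projection.
    w = rng.standard_normal((D + S, n_emg_channels))
    # Broadcast the session embedding across all frames and concatenate,
    # so every output frame is conditioned on content AND session.
    cond = np.concatenate([content, np.tile(session_embedding, (T, 1))], axis=1)
    frames = np.tanh(cond @ w)                # (T, n_emg_channels)
    # Upsample frame-rate features to the (higher) EMG sample rate.
    return np.repeat(frames, upsample, axis=0)

content = rng.standard_normal((50, 256))   # 50 frames of 256-dim features
session = rng.standard_normal(32)          # one embedding per EMG session
emg = generator(content, session)
print(emg.shape)  # (200, 8)
```

Conditioning on a session embedding rather than a speaker identity reflects the setting described above: the same spoken content should map to different EMG signals depending on the electrode placement of a particular recording session.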