Affective speech synthesis is an active research area, but recent approaches usually lack the fine-grained controllability needed to produce utterances with the exact affect a user intends. We propose a puppetry tool based on FastPitch that helps the model's output convey any required suprasegmental meaning. Users can choose any trained FastPitch model and which features should be mimicked, making the approach fine-grained and language-independent.
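The core "puppetry" idea of selectively mimicking prosodic features can be sketched as follows. This is a minimal illustrative example, not the tool's actual API: the feature names, function, and toy values are all assumptions, standing in for the per-symbol pitch, duration, and energy predictions that a model such as FastPitch produces and that a reference utterance can override.

```python
import numpy as np

def mimic_features(predicted, reference, mimic):
    """Selectively override predicted prosodic features (e.g. pitch,
    duration) with those extracted from a reference utterance.

    Only features named in `mimic` are copied from the reference; the
    rest keep the model's own predictions. Names are illustrative.
    """
    out = dict(predicted)
    for name in mimic:
        out[name] = np.asarray(reference[name], dtype=float)
    return out

# Toy per-symbol features: a neutral model prediction vs. a reference
# utterance carrying the intended affect (hypothetical values).
predicted = {"pitch": np.array([120.0, 118.0, 121.0]),
             "duration": np.array([0.08, 0.10, 0.09])}
reference = {"pitch": np.array([180.0, 200.0, 160.0]),
             "duration": np.array([0.12, 0.15, 0.11])}

# Mimic only the pitch contour; keep the model's own durations.
controlled = mimic_features(predicted, reference, mimic=["pitch"])
```

Choosing which features to copy is what makes the control fine-grained: a user can, for instance, puppet only the pitch contour while leaving the model's timing untouched.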