The automatic recognition of children's speech is well known to be challenging, and affect is likewise believed to degrade the performance of a speech recogniser. In this contribution, we investigate the combination of these two phenomena: extensive test runs are carried out for 1k-vocabulary continuous speech recognition on spontaneous angry, motherese, and emphatic children's speech, as opposed to neutral speech. The experiments mainly address the questions of how specific emotions influence word accuracy, and whether neutral speech material suffices for training, as opposed to matched-conditions acoustic model adaptation. As a result, emphatic and angry speech are recognised best, while neutral speech proves a good choice for training. To discuss this effect, we further visualise the distribution of emotions in MFCC space by means of the Sammon transformation.
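For readers unfamiliar with the Sammon transformation mentioned above, the following is a minimal sketch of how per-frame MFCC vectors could be projected to two dimensions for such a visualisation: the classic Sammon stress is minimised by plain gradient descent. The function name `sammon`, the learning-rate and iteration settings, and the synthetic two-class data are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sammon(X, n_iter=500, lr=0.3, eps=1e-12, seed=0):
    """Project rows of X (n, d) to 2-D by minimising Sammon's stress

        E = (1 / sum_{i<j} D_ij) * sum_{i<j} (D_ij - d_ij)**2 / D_ij

    with plain gradient descent, where D holds input-space distances
    and d the distances of the 2-D embedding Y.  (Sketch only; Sammon's
    original method uses a pseudo-Newton update instead.)
    """
    n = X.shape[0]
    # Pairwise Euclidean distances in the original (e.g. MFCC) space.
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    D[D < eps] = eps                          # guard against duplicates
    scale = D[np.triu_indices(n, 1)].sum()    # normaliser over pairs i<j

    rng = np.random.default_rng(seed)
    Y = rng.normal(scale=1e-2, size=(n, 2))   # small random 2-D init

    for _ in range(n_iter):
        diff = Y[:, None, :] - Y[None, :, :]  # (n, n, 2): Y_i - Y_j
        d = np.sqrt((diff ** 2).sum(-1))
        d[d < eps] = eps
        # dE/dY_i = -2/scale * sum_j (D_ij - d_ij)/(D_ij d_ij) (Y_i - Y_j)
        grad = -2.0 / scale * (((D - d) / (D * d))[:, :, None] * diff).sum(axis=1)
        Y -= lr * grad
    return Y

# Toy demo: two Gaussian clusters standing in for MFCC frames of two
# emotion classes (purely synthetic, for illustration only).
rng = np.random.default_rng(1)
neutral = rng.normal(0.0, 1.0, size=(40, 13))  # 13-dim "MFCC" vectors
angry = rng.normal(1.5, 1.0, size=(40, 13))
Y = sammon(np.vstack([neutral, angry]))
print(Y[:3])                                   # first few 2-D coordinates
```

Unlike PCA, this mapping preserves small pairwise distances preferentially (each squared error is weighted by 1/D_ij), which is why it is a natural choice for inspecting how emotion classes cluster or overlap in feature space.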