Emotional states are communicated through facial expressions as well as through affective prosody in the voice. When the two inputs are present simultaneously, they are processed concurrently. The present experiment examines the influence of the voice on the recognition of emotion from the upper versus the lower half of a face. Previous research using an angry–fearful facial expression continuum showed that recognition from the lower half of the face was close to chance level. Our experiment asked whether, under these circumstances, the impact of the voice would differ between the two face halves. The results showed that the cross-modal effect of the voice was the same for the two face-half conditions.