In human-computer call dialogues, human callers often become frustrated or angry due to, e.g., the computer's mistakes. Detecting such emotions would be beneficial for many purposes; however, emotion detection has so far been studied primarily as a classification task. Taking a step beyond classifying the emotion of a single utterance, this paper investigates whether emotions detected from prosodic features can be used practically at the dialogue level, i.e., for monitoring human-computer dialogues to detect BAD calls that require a human agent's assistance. We first show that emotion detection can be improved by a regression model. In combining emotion detection with dialogue monitoring, we demonstrate that decision-level fusion outperforms feature-level fusion. Our experiments also confirm that NEGATIVE emotions may be a sufficient, but not a necessary, condition for detecting BAD calls. Finally, we show that BAD calls due to callers' NEGATIVE emotions may be identified by other clues.
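
To make the contrast between the two fusion strategies concrete, the following is a minimal sketch in Python (scikit-learn), not the paper's actual system: the feature matrices X_prosodic and X_dialogue, the logistic-regression models, and the score-averaging rule are all illustrative assumptions. Feature-level fusion concatenates the two feature sets into one classifier; decision-level fusion trains one classifier per feature set and combines their output scores.

    # Illustrative sketch only: assumed features, models, and fusion rule,
    # not the classifiers or features used in the paper.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_prosodic = rng.normal(size=(200, 10))  # per-call prosodic/emotion features (assumed)
    X_dialogue = rng.normal(size=(200, 8))   # per-call dialogue-level features (assumed)
    y = rng.integers(0, 2, size=200)         # 1 = BAD call, 0 = GOOD call

    # Feature-level fusion: concatenate the feature vectors,
    # then train a single classifier on the joint representation.
    feature_fused = LogisticRegression().fit(
        np.hstack([X_prosodic, X_dialogue]), y)

    # Decision-level fusion: train one classifier per feature set,
    # then combine their posterior scores (here, by simple averaging).
    clf_emotion = LogisticRegression().fit(X_prosodic, y)
    clf_dialogue = LogisticRegression().fit(X_dialogue, y)
    scores = (clf_emotion.predict_proba(X_prosodic)[:, 1]
              + clf_dialogue.predict_proba(X_dialogue)[:, 1]) / 2
    decision_fused = scores > 0.5  # flag a call as BAD above threshold

Decision-level fusion keeps the emotion detector and the dialogue monitor as separate models whose evidence is merged only at the final decision, which is the arrangement the experiments above find more effective than merging the raw features.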