We address the iterative refinement of classifier decisions for the recognition of intoxication, sleepiness, age, and gender from speech. Since these speaker traits are "medium-term" or "long-term" in nature, as opposed to short-term states such as emotion, cumulative evidence can be collected in the form of utterance-level decisions; we show that by fusing these decisions along the time axis, increasingly reliable decisions can be obtained. In extensive test runs on three official INTERSPEECH Challenge corpora, we show that the average recall can be improved by up to 5%, 6%, 10%, and 11% absolute through longer-term observation of speaker sleepiness, gender, intoxication, and age, respectively, compared to the accuracy of a decision from a single utterance.
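The abstract does not specify the fusion rule, but the idea of accumulating utterance-level decisions along the time axis can be sketched with a simple majority vote, one plausible fusion scheme (the function name and tie-breaking behaviour are illustrative assumptions, not the paper's method):

```python
from collections import Counter

def fuse_decisions(decisions):
    """Fuse a sequence of per-utterance class decisions by majority vote.

    `decisions` holds one class label per observed utterance of the same
    speaker; ties resolve to the label that first reached the top count.
    This is a hypothetical stand-in for the paper's fusion scheme.
    """
    counts = Counter(decisions)
    return counts.most_common(1)[0][0]

# With each additional utterance, the fused decision draws on more
# evidence than any single-utterance decision.
print(fuse_decisions(["intoxicated", "sober", "intoxicated"]))  # → intoxicated
```

Because traits like age or intoxication are stable across a session, every new utterance from the same speaker is a fresh sample of the same underlying class, which is why accumulating votes tends to raise recall over a single-utterance decision.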