We present an experiment in human-machine spoken language interaction designed to determine whether subjects made use of diagnostic error messages. The experiment was performed in an air travel domain, using data from 40 subjects who were asked to solve travel planning scenarios by querying a database of airline schedules and fares. A human transcriber typed the subjects' spoken input to a natural language back-end, which performed the remainder of the processing. When the back-end could not process the input, it issued one of several types of diagnostic message, flagging an unknown word, a sequence it could not parse, or a failure to retrieve information from the database. First, we classified each error message according to whether it was used in rephrasing the following query; second, we determined whether the set of error messages in a discourse segment led to eventual recovery, defined as getting an answer to the original query. Our analysis showed that speakers almost always (86% of the time) made use of the error message in forming their next query. It also showed that subjects recovered (got an answer to their query) in most cases (79%), despite receiving an initial error message. We conclude by discussing the effects of a more aggressive understanding strategy, and the problems of error detection and correction in a fully automated system that uses automatic speech recognition rather than a human transcriber.