Annie, our voice-enabled telecommunications service, is a prototyping system that lets users access a variety of telephone-based services by voice. Annie's user interface uses an anthropomorphic "personal assistant" metaphor: the user can hold a conversation-like dialog with Annie, but user input is limited by the grammar-constrained automatic speech recognition (ASR) technology the service uses. Because the active grammar changes with the dialog state, the system must provide clear recognition feedback and orienting information throughout the dialog. For the frequent, expert user, verbal recognition feedback is tedious and time-consuming. This paper describes an experiment that explores the feasibility of providing non-verbal recognition feedback and orienting information through the use of earcons, or auditory icons. Users of Annie were exposed to five earcons presented in parallel with verbal recognition feedback for a minimum of five days. Subsequently, users were asked to recall the identity of each of the five earcons presented alone. Subjects were able to reliably recall each of the earcons. Since users could recall the earcons on their own, it appears feasible for the non-verbal earcons to replace the lengthier verbal recognition feedback.
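The interaction pattern described above — a per-state grammar that constrains what the ASR will accept, plus recognition feedback that can be verbal, non-verbal, or both — can be sketched as follows. This is a minimal illustrative model, not Annie's actual implementation; all names (`DialogState`, `recognize`, `feedback`, the state and earcon labels) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DialogState:
    name: str
    grammar: frozenset  # phrases the ASR will accept in this state
    earcon: str         # non-verbal cue confirming recognition in this state

# Hypothetical dialog states: the active grammar changes with the state,
# so valid utterances in one state may be rejected in another.
STATES = {
    "top": DialogState("top", frozenset({"call", "messages", "help"}), "earcon_top"),
    "call": DialogState("call", frozenset({"home", "office", "cancel"}), "earcon_call"),
}

def recognize(state: DialogState, utterance: str):
    """Grammar-constrained 'recognition': only in-grammar phrases are accepted."""
    return utterance if utterance in state.grammar else None

def feedback(state: DialogState, result, expert: bool):
    """Return (earcon, verbal) feedback for a recognition result.

    Expert users hear only the brief earcon; novices also get the
    lengthier verbal confirmation.
    """
    if result is None:
        return ("earcon_error", "I didn't catch that.")
    verbal = None if expert else f"OK, {result}."
    return (state.earcon, verbal)
```

Under this model, the experiment's question is whether the `expert=True` path (earcon alone) carries enough information once users have learned the earcon-to-meaning mapping from repeated paired exposure.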