Recent work in human-computer interaction has highlighted the need for general, robust speech recognition systems. Such systems require an initial analysis stage that adequately represents perceptually important features, and a classification component that can cope with intra- and inter-speaker variability. In this study, a prototype isolated word recogniser was constructed, comprising an auditory-based analysis component and a pattern classification module based on a parallel distributed processing paradigm [1].
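To make the two-stage architecture concrete, the following is a minimal sketch, not the paper's actual system: it assumes each isolated word has already been reduced by the analysis stage to a fixed-length feature vector, and classifies it with a small feed-forward network trained by gradient descent, in the spirit of the parallel distributed processing paradigm. All sizes, data, and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper): each word
# token is a 16-dimensional feature vector, classified into 4 word classes
# by a network with one hidden layer of 8 units.
N_FEATURES, N_HIDDEN, N_WORDS = 16, 8, 4

W1 = rng.normal(0.0, 0.1, (N_FEATURES, N_HIDDEN))
W2 = rng.normal(0.0, 0.1, (N_HIDDEN, N_WORDS))

def forward(x):
    """Forward pass: tanh hidden layer, softmax word probabilities."""
    h = np.tanh(x @ W1)
    z = h @ W2
    p = np.exp(z - z.max(axis=-1, keepdims=True))
    return h, p / p.sum(axis=-1, keepdims=True)

# Synthetic stand-in data: one noisy prototype feature vector per word,
# mimicking intra-speaker variability around a canonical pronunciation.
protos = rng.normal(0.0, 1.0, (N_WORDS, N_FEATURES))
X = np.repeat(protos, 20, axis=0) + rng.normal(0.0, 0.3, (N_WORDS * 20, N_FEATURES))
y = np.repeat(np.arange(N_WORDS), 20)

# Full-batch gradient descent on the cross-entropy loss.
lr = 0.5
for _ in range(200):
    h, p = forward(X)
    d2 = p.copy()
    d2[np.arange(len(y)), y] -= 1.0   # softmax + cross-entropy gradient
    d2 /= len(y)
    gW2 = h.T @ d2
    dh = (d2 @ W2.T) * (1.0 - h ** 2)  # back-propagate through tanh
    gW1 = X.T @ dh
    W1 -= lr * gW1
    W2 -= lr * gW2

_, p = forward(X)
acc = float((p.argmax(axis=1) == y).mean())
print(f"training accuracy: {acc:.2f}")
```

On this easily separable toy data the network learns the four word classes quickly; the point is only to illustrate how a distributed classifier can absorb variability that a template matcher would have to handle explicitly.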