We examine to what extent a knowledge-based model can recognise segmental structure without feedback from semantic information and without stochastic modelling. The proposed system is inspired by features of human cognitive processing, in that the speech signal activates parallel, distributed decoding processes. The conceptually distinct modules are: (i) an automatic segmentation module; (ii) a first analytic recognition module based on oriented graphs with state transitions; (iii) a second analytic recognition module based on phonetic rules; and (iv) a global recognition module based on metric methods. Finally, inference rules scrutinise all the parallel results and, with access to a dictionary, propose ranked word candidates.
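To make the control flow concrete, here is a minimal Python sketch of the architecture just described, under stated assumptions: every name (`segment`, `graph_recognizer`, `rule_recognizer`, `metric_recognizer`, `rank_candidates`, the toy `LEXICON`) is a hypothetical placeholder, and each module is stubbed to emit scored hypotheses rather than implementing the actual decoding. The point is only the shape of the system: parallel modules feed a final stage that merges their results against a dictionary and ranks word candidates.

```python
# Hypothetical sketch of the parallel decoding architecture; module names
# and scores are illustrative placeholders, not the paper's implementation.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    word: str
    score: float  # higher means more plausible

def segment(signal):
    """Automatic segmentation: split the signal into segments (stub)."""
    return [signal]  # trivial: treat the whole signal as one segment

def graph_recognizer(segments):
    """Analytic recognition via oriented graphs with state transitions (stub)."""
    return [Hypothesis("cat", 0.8), Hypothesis("cap", 0.5)]

def rule_recognizer(segments):
    """Analytic recognition via phonetic rules (stub)."""
    return [Hypothesis("cat", 0.7), Hypothesis("cut", 0.4)]

def metric_recognizer(signal):
    """Global recognition via metric (distance-based) methods (stub)."""
    return [Hypothesis("cat", 0.9), Hypothesis("hat", 0.3)]

LEXICON = {"cat", "cap", "cut", "hat"}  # toy dictionary

def rank_candidates(signal):
    """Scrutinise all parallel results, filter through the dictionary,
    and return word candidates ranked by combined score."""
    segments = segment(signal)
    pooled = (graph_recognizer(segments)
              + rule_recognizer(segments)
              + metric_recognizer(signal))
    combined = {}
    for h in pooled:
        if h.word in LEXICON:  # dictionary access
            combined[h.word] = combined.get(h.word, 0.0) + h.score
    return sorted(combined.items(), key=lambda kv: -kv[1])

if __name__ == "__main__":
    # 'cat' ranks first because all three recognisers support it.
    print(rank_candidates(signal=[0.0, 0.1, 0.2]))
```

Note the design choice this illustrates: no module sees semantic feedback, and no probabilities are estimated from data; the combination step simply pools the evidence the knowledge-based modules produce in parallel.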