Over the last decade, learning to pronounce written words by analogy has received considerable attention in psycholinguistic circles thanks to its cognitive plausibility and flexibility [1,2]. Computational models of analogy-based learning have been developed to probe the realism of this hypothesis and to discover the nature and function of the factors that drive analogy [3,4,5]. However, comparatively little effort has been put into an overall assessment of how well analogy performs on specific NLP tasks, in particular with respect to its computational tractability and complexity [6] and its level of accuracy. Although it is well established that some form of analogy-based reasoning underlies how children learn to read written words aloud, we do not yet know with the same certainty how accurate pronunciation by analogy is compared with rule-governed pronunciation. In this paper we intend to show that pronunciation by analogy is not only a psycholinguistically realistic cognitive hypothesis, but also a highly reliable alternative to rule-based approaches to text-to-speech conversion.
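To make the contrast with rule-based conversion concrete, the core intuition of pronunciation by analogy can be sketched as follows: an unseen word is pronounced by recombining phoneme fragments borrowed from known words that share letter substrings with it. This is a minimal toy sketch, not any published model; the lexicon, its one-to-one letter-to-phoneme alignment, and the per-position voting scheme are all simplifying assumptions (full systems instead build and search a pronunciation lattice).

```python
# Toy sketch of pronunciation by analogy (PbA): a novel word is
# pronounced by recombining phoneme substrings from known words that
# share letter substrings with it.  The lexicon is hypothetical and
# letter-to-phoneme aligned one-to-one for simplicity.

from collections import Counter

# Hypothetical aligned lexicon: one phoneme per letter.
LEXICON = {
    "cat": ["k", "a", "t"],
    "cab": ["k", "a", "b"],
    "bat": ["b", "a", "t"],
    "tab": ["t", "a", "b"],
}

def pronounce_by_analogy(word):
    """Vote on a phoneme for each letter position, using every
    substring (length >= 2) the word shares with a lexicon entry."""
    votes = [Counter() for _ in word]
    for start in range(len(word)):
        for end in range(start + 2, len(word) + 1):
            chunk = word[start:end]
            for entry, phones in LEXICON.items():
                pos = entry.find(chunk)
                while pos != -1:
                    # Credit the matched entry's phonemes to the
                    # corresponding positions of the target word.
                    for i, ph in enumerate(phones[pos:pos + len(chunk)]):
                        votes[start + i][ph] += 1
                    pos = entry.find(chunk, pos + 1)
    # None marks positions for which no analogy was found.
    return [c.most_common(1)[0][0] if c else None for c in votes]

print(pronounce_by_analogy("bab"))  # → ['b', 'a', 'b']
```

Here the unseen string "bab" is pronounced by combining the onset of "bat" with the ending shared by "cab" and "tab", illustrating how coverage and accuracy depend entirely on the lexicon rather than on hand-written letter-to-sound rules.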