This paper shows how syntactic neural networks can be applied to the problem of translating orthographic strings to phonetic strings. The work has two novel aspects. First, the model is symmetric and so is also capable of phonetics-to-text translation. Second, although training is based on a set of whole-word orthographic/phonetic symbol-string pairs, it is unsupervised in the sense that no segmentation information is included. The training data consist of a (randomly selected) subset of N monosyllabic pairs extracted from the machine-readable Oxford Advanced Learner's Dictionary. The trained nets were tested on the training subset and on an equal-sized (disjoint) test set. Early results show that translation accuracy, as assessed by the Levenshtein distance between the network's output and the dictionary transcription, is asymptotic to 50% for both seen and unseen words as N increases.
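
As a rough illustration of the scoring the abstract describes, the sketch below computes the Levenshtein (edit) distance between a network's output transcription and the dictionary transcription, and converts it to a per-word score. The function names and the length-normalised accuracy measure are illustrative assumptions; the paper does not specify how the distance is mapped to a percentage accuracy.

    # Minimal sketch of Levenshtein-distance scoring for transcription output.
    # The normalisation by reference length is an assumption for illustration,
    # not the authors' evaluation procedure.

    def levenshtein(a: str, b: str) -> int:
        """Edit distance between symbol strings a and b (insert/delete/substitute)."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,          # deletion
                                curr[j - 1] + 1,      # insertion
                                prev[j - 1] + cost))  # substitution
            prev = curr
        return prev[-1]

    def word_accuracy(output: str, reference: str) -> float:
        """One plausible per-word score: 1 minus distance normalised by reference length."""
        if not reference:
            return 1.0 if not output else 0.0
        return max(0.0, 1.0 - levenshtein(output, reference) / len(reference))

    # Example: a hypothetical network output compared against a dictionary
    # transcription for a monosyllabic word.
    print(word_accuracy("kaet", "k&t"))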