Traditional synthesis systems often rely on a large set of rules and a hand-crafted set of synthesis parameters to produce output speech. Gathering the synthesis parameters and developing the rule set are both very labour-intensive tasks. This paper offers an alternative: a set of artificial neural networks (ANNs) is used to produce the filter parameters that drive a synthesiser. The networks are trained on data that is gathered fully automatically, and they offer a storage-efficient means of synthesis without the need for explicit rule enumeration. The networks can produce temporal variation within a phonetic segment and differing outputs when input contexts are varied. Furthermore, their distributed architecture enables them to produce reasonable outputs when faced with novel inputs. In addition, a feedback mechanism incorporated into the architecture creates smooth transitions at segment boundaries.
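
To make the described architecture concrete, the following is a minimal sketch of this kind of context-to-parameters network with output feedback. It is not the paper's actual configuration: the layer sizes, feature dimensions, and initialisation below are illustrative assumptions, and training is omitted.

# Minimal sketch (assumed configuration, not the paper's): a single
# hidden-layer network that maps a phonetic-context vector, concatenated
# with its own previous output (the feedback mechanism), to a frame of
# synthesis filter parameters.
import numpy as np

CONTEXT_DIM = 12   # assumed size of the phonetic-context encoding
PARAM_DIM = 16     # assumed number of filter parameters per frame
HIDDEN_DIM = 32    # assumed hidden-layer width

rng = np.random.default_rng(0)
# Input = current phonetic context + previous output frame (feedback loop).
W1 = rng.normal(0.0, 0.1, (HIDDEN_DIM, CONTEXT_DIM + PARAM_DIM))
b1 = np.zeros(HIDDEN_DIM)
W2 = rng.normal(0.0, 0.1, (PARAM_DIM, HIDDEN_DIM))
b2 = np.zeros(PARAM_DIM)

def step(context: np.ndarray, prev_params: np.ndarray) -> np.ndarray:
    """One frame: context vector plus fed-back parameters -> new parameters."""
    x = np.concatenate([context, prev_params])
    h = np.tanh(W1 @ x + b1)      # hidden layer
    return np.tanh(W2 @ h + b2)   # filter parameters for this frame

def synthesise_segment(contexts: list, initial_params: np.ndarray) -> np.ndarray:
    """Run the network frame by frame over a phonetic segment.

    Feeding each output back in as part of the next input lets the
    parameter trajectory vary within a segment, and carrying the last
    frame across a boundary keeps the transition smooth.
    """
    params = initial_params
    frames = []
    for ctx in contexts:
        params = step(ctx, params)
        frames.append(params)
    return np.stack(frames)

# Usage: three frames of a single (randomly assumed) phonetic context.
ctx = rng.normal(0.0, 1.0, CONTEXT_DIM)
trajectory = synthesise_segment([ctx, ctx, ctx], np.zeros(PARAM_DIM))
print(trajectory.shape)  # (3, 16): one filter-parameter vector per frame

Because the previous output is part of every input, consecutive frames cannot jump arbitrarily, which is the mechanism the abstract credits for smooth transitions at segment boundaries; the same feedback also yields a different trajectory within a segment even when the context input is constant.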