This paper describes a framework for a data-driven approach to the design of text-to-speech synthesis systems, termed speech synthesis from recognition (SYNFREC). It offers an alternative to the conventional rule-based approach, in which introspection is used both to set up the rules and to adjust them. The method trains the synthesiser by connecting its output to the input of a speech recogniser and using data provided at the various levels of the recogniser to train the conversion and controller neural networks in the synthesiser, making the procedure completely automatic and data-driven. A very simple example of SYNFREC, implementing vowel synthesis from symbols, is given.
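The closed-loop training idea can be sketched in miniature. The following is an illustrative toy, not the paper's implementation: a trainable linear "synthesiser" maps vowel symbols to a pair of acoustic parameters, a frozen linear "recogniser" maps those parameters back to symbol scores, and the synthesiser is adjusted by gradient descent so that the recogniser identifies the intended symbol. All dimensions, weights, and names here are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative, not from the paper): 3 vowel symbols,
# 2 acoustic parameters (e.g. formant-like values).
n_symbols, n_params = 3, 2

# Frozen "recogniser": a fixed linear map from acoustic parameters
# to symbol scores.
W_rec = np.array([[1.0, 0.0, -1.0],
                  [0.0, 1.0, -1.0]])

# Trainable "synthesiser": maps a one-hot symbol to acoustic parameters.
W_syn = rng.normal(size=(n_symbols, n_params))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Closed loop: synthesise, recognise, and adjust the synthesiser so the
# recogniser's posterior matches the intended symbol (cross-entropy descent).
lr = 0.5
for _ in range(300):
    for s in range(n_symbols):
        x = np.eye(n_symbols)[s]          # intended symbol (one-hot)
        params = x @ W_syn                # synthesiser output
        probs = softmax(params @ W_rec)   # recogniser's symbol posterior
        d_params = W_rec @ (probs - x)    # gradient of cross-entropy loss
        W_syn -= lr * np.outer(x, d_params)

# Check: feed each synthesised "vowel" back through the recogniser.
for s in range(n_symbols):
    params = np.eye(n_symbols)[s] @ W_syn
    print(s, "->", softmax(params @ W_rec).argmax())
```

The point of the sketch is the data flow: no rules are written by hand; the synthesiser's parameters are shaped entirely by the recogniser's response to its output.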