Continuous speech production is a highly complex process involving many parts of the human brain. To date, no fundamental representation that allows decoding of continuous speech from neural signals has been presented. Here we show that techniques from automatic speech recognition can be applied to decode a textual representation of spoken words from neural signals. We model phones as the fundamental units of the speech process in invasively measured brain activity, recorded via intracranial electrocorticography (ECoG). These phone models give insights into the timing and location of neural processes associated with continuous speech production, and they can be used in a speech recognizer to decode the neural data into a textual representation. When the dictionary is restricted to small subsets, word error rates as low as 25% can be achieved. Because the brain-activity data sets are fairly small, we investigate alternatives to Gaussian models, relying instead on robust, regularized discriminative models.
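For reference, word error rate (WER) is the standard evaluation metric in automatic speech recognition: the word-level Levenshtein distance between the decoded and reference transcripts, normalized by the reference length. The following is a minimal sketch in Python, not code from the paper; the function name and example sentences are illustrative.

```python
# Minimal sketch: word error rate (WER) as used in ASR evaluation.
# WER = (substitutions + deletions + insertions) / reference length,
# computed as a Levenshtein distance over word sequences.

def word_error_rate(reference, hypothesis):
    """Return WER between two word lists (hypothetical helper)."""
    m, n = len(reference), len(hypothesis)
    # dp[i][j] = edit distance between reference[:i] and hypothesis[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # i deletions
    for j in range(n + 1):
        dp[0][j] = j          # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[m][n] / m

# Example: one substitution in a four-word reference gives WER = 0.25,
# i.e., the 25% figure corresponds to one word error per four words.
ref = "the quick brown fox".split()
hyp = "the quick brown box".split()
print(word_error_rate(ref, hyp))  # 0.25
```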