Over the past several decades, automatic speech recognition has made great progress by combining statistics and machine learning with perceptual and structural knowledge of speech and language and of their variability. This paper reviews recent work that applies some of these approaches to the cortical processing of speech and language in the human brain, in order to better understand how that processing works. Specific experiments demonstrate the feasibility of discriminating among small sets of spoken words (83% accuracy on 10 words) and semantic categories (76% accuracy on 2 categories). This speech and language information is broadly distributed, both spatially and temporally, across the brain.
Index Terms: speech recognition, semantics, machine learning, brain, magnetoencephalography, electroencephalography, support vector machines
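As a rough illustration of the classification setup the abstract describes, the sketch below applies a linear support vector machine to simulated multichannel brain recordings (e.g., MEG/EEG) to discriminate among 10 spoken words. All names, array shapes, and parameters are illustrative assumptions, not the authors' actual pipeline; the random data here will score near the 10% chance level, whereas the paper reports 83%.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical dimensions: trials x channels x time samples, 10 word classes.
n_trials, n_channels, n_samples, n_words = 200, 150, 120, 10

# Placeholder data standing in for evoked responses; one spoken word per trial.
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, n_words, size=n_trials)

# Flatten each trial's spatiotemporal response into one feature vector,
# consistent with the claim that word information is distributed across
# space (channels) and time (samples).
X_flat = X.reshape(n_trials, -1)

# Standardize features, then train a linear SVM, evaluated by cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X_flat, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2%}")
```

A linear kernel is a common default for such high-dimensional, low-trial-count neural data, since it limits overfitting and its weights can be mapped back onto channels and time points.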