Functional magnetic resonance imaging (fMRI) was used to investigate the brain activity underlying audio-visual speech perception in normally hearing and congenitally deaf individuals. Data were collected while subjects were presented with three types of speech stimuli: audio-only (speech without visual input), visual-only (video of a speaking face without audio), and audio-visual (video of a speaking face with audio). A control condition consisted of viewing a blank screen. The stimuli were vowels or CVCV syllables, presented in separate blocks.
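For concreteness, a block schedule of this kind can be sketched as follows; the condition labels, ordering, and interleaving of control blocks are illustrative assumptions, not the study's actual design parameters:

    # Hypothetical block schedule for the three stimulus conditions plus control.
    # Labels, ordering, and block structure are assumptions for illustration only.
    conditions = ["audio_only", "visual_only", "audio_visual"]
    baseline = "blank_screen"  # control condition

    schedule = []
    for stimulus_type in ("vowels", "CVCV"):
        for condition in conditions:
            schedule.append((condition, stimulus_type))
            schedule.append((baseline, None))  # control block between stimulus blocks

    for block in schedule:
        print(block)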
Brain regions involved in lipreading were identified for both the normally hearing and the congenitally deaf subject groups, and differences in activation patterns between the groups were examined. For normally hearing subjects, active regions during the visual-only condition included visual cortex, angular gyrus, fusiform gyrus, and auditory cortex, as well as premotor areas in frontal cortex. The activation pattern for deaf subjects viewing visual-only stimuli was similar to that of normally hearing subjects, but showed distinctly more activity in the right hemisphere (for both vowels and CVCVs) and far less activity in premotor and parietal cortex. Interestingly, the pattern of activity for deaf subjects in the visual-only condition resembled that of normally hearing subjects in the audio-visual condition. Additionally, the inferior cerebellum, particularly in the right hemisphere, was active in deaf subjects but not in normally hearing subjects.
Finally, effective connectivity analyses (structural equation modeling and dynamic causal modeling) were performed to investigate neural connectivity between brain regions in the different conditions for both subject groups. Preliminary results from both analyses suggest that a visual cortex → fusiform gyrus → auditory cortex pathway may be the primary projection along which visual speech is processed to create an auditory percept. [Research supported by NIDCD.]
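To make the modeled pathway concrete, the sketch below simulates a simplified, linear form of the dynamic causal modeling state equation, dx/dt = Ax + Cu, for the three regions named above. It omits DCM's bilinear modulatory terms and hemodynamic forward model, and all coupling strengths and timing values are hypothetical, not fitted results:

    import numpy as np

    # Regions: 0 = visual cortex, 1 = fusiform gyrus, 2 = auditory cortex.
    # A encodes the hypothesized feed-forward pathway; values are illustrative.
    A = np.array([
        [-1.0,  0.0,  0.0],   # visual cortex: self-decay, driven by input only
        [ 0.4, -1.0,  0.0],   # fusiform gyrus driven by visual cortex
        [ 0.0,  0.4, -1.0],   # auditory cortex driven by fusiform gyrus
    ])
    C = np.array([1.0, 0.0, 0.0])  # visual stimulus enters at visual cortex

    dt, steps = 0.01, 2000
    x = np.zeros(3)
    trace = []
    for t in range(steps):
        u = 1.0 if (t * dt) % 2.0 < 1.0 else 0.0  # on/off stimulus blocks
        x = x + dt * (A @ x + C * u)              # Euler step of dx/dt = Ax + Cu
        trace.append(x.copy())

In an actual DCM analysis, the entries of A (and condition-specific modulations) would be estimated from the fMRI time series and compared across conditions and groups; the feed-forward structure above simply encodes the visual cortex → fusiform gyrus → auditory cortex hypothesis.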