Analysis of our own experimental data and of the literature on the morpho-functional organization of the acoustic communication system in humans, compared with that of animals, suggests that its separate substrata function as a unity, depending on the type of sensory-motor coordination of the final effector reactions as well as on the complexity of the task presented. To test this hypothesis, a procedure was developed that combined perception of monaurally presented auditory stimuli with simultaneous generation of vocal signals of two types: (a) precise imitation of the presented sound; (b) recognition of complex sounds against the background of spontaneous imitation of speech sounds (vowels, consonants) or of frequency-modulated voice. The results show that precise imitation of familiar vowels of the native language is a non-lateralized function, whereas precise imitation of unfamiliar vowels of a foreign language is a left-lateralized function requiring considerably more time to perform. Recognition of artificial complex sounds depends on the quality of the self-generated sounds. The data provide evidence of left-hemisphere lateralization in the rigid interaction between the articulation program and the acoustic recognition system, whereas the vocalization-auditory interaction appears to be symmetrical.
Keywords: speech-hearing interaction, lateralization