In this paper, we present a parallel computational model for integrating speech recognition and natural language processing (NLP). We have developed a parallel speech understanding system on the Semantic Network Array Processor (SNAP), a massively parallel computer developed at the University of Southern California. The parallel speech understanding algorithm is based on a memory-based parsing scheme. The key to integrating speech and linguistic processing is the construction of a hierarchically structured knowledge base; processing is carried out by passing markers through this knowledge base in parallel. Speech-specific problems, such as insertion, deletion, and substitution errors, word-boundary ambiguity, and multiple word hypotheses, were analyzed and parallel solutions provided. The experimental results show that processing time increases linearly with the length of the target sentence and logarithmically with the size of the knowledge base. This demonstrates that a massively parallel approach provides a viable platform for integrating speech and NLP on larger domains.
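The abstract describes marker passing over a hierarchical knowledge base but gives no code, so the following is a minimal, hypothetical Python sketch of the general technique: markers are propagated upward from word-hypothesis nodes, and a collision of markers from different origins signals a candidate interpretation. The knowledge base contents, node names, and function are invented for illustration and do not come from the paper; the per-level loop only simulates the lock-step parallelism that SNAP provides in hardware.

```python
from collections import defaultdict, deque

# Hypothetical hierarchical knowledge base: edges point from word
# hypotheses up to the concepts that subsume them (is-a / part-of links).
KB = {
    "flight": ["travel-event"],
    "boston": ["city"],
    "city": ["travel-event"],
    "travel-event": [],
}

def pass_markers(seeds):
    """Propagate one marker per seed upward through the KB.

    On SNAP-style hardware every node would fire simultaneously; the
    frontier loop here simulates that propagation sequentially.
    """
    markers = defaultdict(set)              # node -> set of seed origins
    frontier = deque((word, word) for word in seeds)
    while frontier:
        node, origin = frontier.popleft()
        if origin in markers[node]:
            continue                        # marker already present: stop
        markers[node].add(origin)
        for parent in KB.get(node, []):
            frontier.append((parent, origin))
    return markers

# Markers from two word hypotheses collide at "travel-event",
# signalling a candidate interpretation covering both words.
hits = pass_markers(["flight", "boston"])
collisions = [n for n, origins in hits.items() if len(origins) > 1]
print(collisions)  # → ['travel-event']
```

Because each marker stops as soon as it reaches a node it has already visited, the work per sentence grows with sentence length, while propagation depth grows only with the height of the concept hierarchy, which is consistent with the linear and logarithmic scaling the abstract reports.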