This work proposes a new architecture for deep neural network training. Instead of a single cascade of fully connected hidden layers between the input features and the target output, the new architecture organizes the hidden layers into several regions, each with its own target. During training, the regions communicate through connections between their intermediate hidden layers, sharing the internal representations learned from their respective targets. The regions need not share the same input features. This paper presents the performance of acoustic models built with this architecture using speaker-independent and speaker-dependent features. Experimental results are compared not only with the baseline DNN model but also with ensemble DNN, unfolded RNN, and stacked DNN models. Experiments on the IARPA-sponsored Babel tasks demonstrate absolute WER reductions ranging from 0.8% to 2.7%.
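To make the region layout concrete, the following is a minimal PyTorch sketch of a two-region network in the spirit described above: each region has its own input features and its own target, and one region's intermediate hidden representation is fed into the other through a cross-region connection. The class name `TwoRegionNet`, the single cross-connection, and all layer sizes are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoRegionNet(nn.Module):
    def __init__(self, in_a, in_b, hidden, targets_a, targets_b):
        super().__init__()
        # Region A: its own input features and its own target.
        self.a1 = nn.Linear(in_a, hidden)
        self.a2 = nn.Linear(hidden, hidden)
        self.out_a = nn.Linear(hidden, targets_a)
        # Region B: a (possibly different) input feature stream and target.
        self.b1 = nn.Linear(in_b, hidden)
        # B's second layer also consumes A's intermediate representation.
        self.b2 = nn.Linear(hidden + hidden, hidden)
        self.out_b = nn.Linear(hidden, targets_b)

    def forward(self, x_a, x_b):
        h_a1 = torch.relu(self.a1(x_a))
        h_a2 = torch.relu(self.a2(h_a1))
        h_b1 = torch.relu(self.b1(x_b))
        # Cross-region connection: share A's learned internal
        # representation with region B during training.
        h_b2 = torch.relu(self.b2(torch.cat([h_b1, h_a1], dim=-1)))
        return self.out_a(h_a2), self.out_b(h_b2)

# Joint training step: each region contributes a loss on its own target.
model = TwoRegionNet(in_a=40, in_b=80, hidden=512,
                     targets_a=3000, targets_b=3000)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x_a, x_b = torch.randn(32, 40), torch.randn(32, 80)
y_a, y_b = torch.randint(0, 3000, (32,)), torch.randint(0, 3000, (32,))
logits_a, logits_b = model(x_a, x_b)
loss = F.cross_entropy(logits_a, y_a) + F.cross_entropy(logits_b, y_b)
loss.backward()
opt.step()
```

Summing the two per-region cross-entropy losses is one plausible way to let both targets shape the shared representations; the paper's exact training objective and connection pattern may differ.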