Most neural, Markov, or hybrid neural/Markov models for continuous speech recognition or isolated word recognition have a fixed architecture during training. A possible consequence is non-optimal models that are often too small or too large. At ICASSP-92 we proposed a Self-structuring Hidden Control (SHC) neural model [1] for isolated word recognition. Such self-structuring models can generate near-optimal model architectures during training. In this paper we extend this work on isolated word recognition and describe how SHC models can be integrated into a continuous speech recognition system. Context-Independent (CI) SHC models can efficiently model isolated words or phones, and Context-Dependent (CD) SHC models can efficiently model triphones. The results presented in this paper show that CI-SHC and CD-SHC models are potential alternatives to traditional pattern models. Furthermore, we analyze the self-structuring capabilities of SHC models by showing the relation between pattern complexity and model complexity.