This paper proposes recurrent neural prediction models (RNPM) for speech recognition: recurrent neural networks trained as nonlinear predictors of speech signals. Among the many possible recurrent architectures, two well-known recurrent neural networks are tested here. The RNPM requires no time-alignment algorithm, which considerably reduces computation time in the recognition phase. Experiments on Korean digit recognition show that the RNPM performs slightly better than other predictive neural networks.
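To make the prediction-based recognition idea concrete, the following is a minimal sketch (not the paper's actual model): an Elman-style recurrent net predicts each next feature frame, and a test utterance is assigned to the word whose predictor accumulates the lowest squared prediction error. All class and function names here (`ElmanPredictor`, `recognize`), the feature dimensions, and the random initialization are illustrative assumptions; a real system would train one predictor per word on labeled utterances.

```python
import numpy as np

rng = np.random.default_rng(0)

class ElmanPredictor:
    """Illustrative Elman-style recurrent net predicting the next feature frame."""
    def __init__(self, dim, hidden):
        # Random weights stand in for trained parameters (assumption for the sketch).
        self.Wx = rng.normal(0.0, 0.3, (hidden, dim))    # input-to-hidden
        self.Wh = rng.normal(0.0, 0.3, (hidden, hidden)) # recurrent hidden-to-hidden
        self.Wo = rng.normal(0.0, 0.3, (dim, hidden))    # hidden-to-output (predicted frame)

    def prediction_error(self, frames):
        """Accumulated squared error of predicting frame t+1 from frames up to t."""
        h = np.zeros(self.Wh.shape[0])
        err = 0.0
        for t in range(len(frames) - 1):
            h = np.tanh(self.Wx @ frames[t] + self.Wh @ h)
            pred = self.Wo @ h
            err += float(np.sum((pred - frames[t + 1]) ** 2))
        return err

def recognize(frames, word_models):
    """Choose the word whose predictor yields the lowest accumulated error.

    Because the recurrent state carries temporal context, no explicit
    time-alignment (e.g. DTW) step is needed: the error is simply summed
    frame by frame over the utterance.
    """
    errors = {w: m.prediction_error(frames) for w, m in word_models.items()}
    return min(errors, key=errors.get)

# Usage: two (untrained) word models scoring a random 20-frame utterance.
models = {"zero": ElmanPredictor(8, 16), "one": ElmanPredictor(8, 16)}
utterance = rng.normal(size=(20, 8))
best_word = recognize(utterance, models)
```

The per-frame error loop runs in linear time in the utterance length, which is the source of the speed advantage over alignment-based predictive models the abstract mentions.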