An audio-visual intelligibility score is commonly used as an evaluation measure in visual speech synthesis. In particular, the intelligibility score of a talking head reflects the accuracy of its facial model [1][2]. Building a facial model involves two stages: constructing a realistic face and realizing dynamic, human-like motions. We focus on synthesizing lip movements from input acoustic speech in order to realize such dynamic motions. The goal of our research is to synthesize lip movements natural enough to support lip-reading. In previous work, we proposed a lip movement synthesis method using HMMs that incorporates a forward coarticulation effect, and we confirmed its effectiveness through objective evaluation tests. In this paper, we carry out subjective evaluation, conducting both an intelligibility test and an acceptability test.