ISCA Archive SLaTE 2009

Analysis and comparison of automatic language proficiency assessment between shadowed sentences and read sentences

Dean Luo, Nobuaki Minematsu, Yutaka Yamauchi, Keikichi Hirose

In this paper, we investigate automatic language proficiency assessment from learners' utterances produced through shadowing and reading aloud. By increasing the difficulty of the learners' tasks in each practice, we examine how the automatic scores, the conventional GOP and the proposed F-GOP, change according to the cognitive load placed on learners. We also investigate the effects and side-effects of MLLR (Maximum Likelihood Linear Regression) adaptation on shadowing and reading aloud. Experimental results show that shadowing reflects the learners' true proficiency better than reading aloud. Global MLLR adaptation improves evaluation performance more for reading aloud than for shadowing, but the performance of shadowing remains higher. Finally, we show that, by selecting native utterances of adequate semantic difficulty, the evaluation performance of shadowing is further improved.
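For reference, the conventional GOP score referred to in the abstract is commonly computed per phone as a duration-normalized likelihood ratio. The sketch below gives the widely used formulation; the exact variant adopted in this work and the definition of the proposed F-GOP are given in the body of the paper, so this should be read as an assumed standard form rather than the authors' precise definition.

\[
\mathrm{GOP}(p) \;=\; \frac{1}{N_F(p)}\,
\left|\,\log \frac{P\!\left(O^{(p)} \mid p\right)}
{\max_{q \in Q} P\!\left(O^{(p)} \mid q\right)}\,\right|
\]

Here \(O^{(p)}\) denotes the speech segment aligned to phone \(p\) by forced alignment, \(Q\) is the phone set used in the unconstrained recognition pass, and \(N_F(p)\) is the number of frames in the segment, so the score is normalized by phone duration.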