Recent studies on child-robot interaction (CRI) emphasize that children's speech behaviors towards robots are shaped not only by their beliefs about the robot but also by individual variation in how they perceive and build rapport with robots. Speech accommodation, the adjustment of one's speech features in response to an interlocutor, is a valuable indicator of child speech in CRI. While previous research has mainly focused on simple interaction tasks, little is known about natural conversations between children and robots. In our study, fifty-five Mandarin-speaking children collaborated with the virtual robot Furhat to identify differences between pictures using spoken language. Keywords were recorded before and after the interaction. Acoustic analysis revealed a significant reduction in the differences between child and robot in fundamental frequency, vowel duration, and vowel formants of the keywords. Importantly, the children's accommodation showed substantial individual variability, guided by the child's personality and slightly influenced by their perception of the robot's 'agreeableness,' interpreted as its degree of human-likeness. This is the first study to investigate speech accommodation in natural conversations between Mandarin-speaking children and a social robot. It provides new evidence supporting a hybrid model, combining automaticity and social motivation, for interpreting accommodation in child communication, among Mandarin speakers, and in human-robot interaction.