ISCA Archive Interspeech 2022

Multimodal Persuasive Dialogue Corpus using Teleoperated Android

Seiya Kawano, Muteki Arioka, Akishige Yuguchi, Kenta Yamamoto, Koji Inoue, Tatsuya Kawahara, Satoshi Nakamura, Koichiro Yoshino

We collected a multimodal corpus of persuasive dialogues for building a persuasive dialogue system that encourages users to change their behavior. The corpus was constructed with an android robot that was remotely controlled by the Wizard-of-Oz (WoZ) method during user interactions with the system. We transcribed the collected speech and annotated dialogue act labels. We also extracted the facial features of the dialogue participants. Pre- and post-questionnaires identified the subjects' personality, their awareness of the target domain of persuasion, the changes in their awareness before and after the persuasion, and whether they agreed to the persuasion during the dialogues. In addition, we conducted a follow-up survey with each subject to investigate whether the persuasion actually led to behavioral change. Moreover, we built linear classifiers that predict persuasion success in order to identify effective features.
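The abstract mentions linear classifiers that predict persuasion success from the annotated features. As a rough illustration only, the sketch below shows how such a classifier could be set up with scikit-learn; the feature matrix (dialogue-act counts, facial-feature statistics, questionnaire scores) and labels here are placeholders, not the paper's actual data or feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder feature matrix: one row per dialogue, columns standing in for
# dialogue-act counts, facial-feature statistics, and questionnaire scores.
# (Illustrative random data; the paper's real features and labels differ.)
rng = np.random.default_rng(0)
X = rng.random((100, 12))
y = rng.integers(0, 2, 100)  # 1 = persuasion succeeded, 0 = failed

# Linear classifier (logistic regression) with feature standardization.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Cross-validated accuracy as a rough measure of how predictable success is.
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Mean CV accuracy: {scores.mean():.3f}")

# Fit on all data and inspect the learned weights to see which features
# are most strongly associated with persuasion success.
clf.fit(X, y)
weights = clf.named_steps["logisticregression"].coef_.ravel()
top = np.argsort(np.abs(weights))[::-1][:5]
print("Most influential feature indices:", top)
```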