ISCA Archive Interspeech 2023

Unsupervised Auditory and Semantic Entrainment Models with Deep Neural Networks

Jay Kejriwal, Štefan Beňuš, Lina M. Rojas-Barahona

Speakers tend to engage in adaptive behavior, known as entrainment, in which they become similar to their interlocutor in various aspects of speaking. We present an unsupervised deep learning framework that derives meaningful representations from textual features for modeling semantic entrainment. We investigate the model's performance by extracting features using different variants of the BERT model (DistilBERT and XLM-RoBERTa) and Google's universal sentence encoder (USE) embeddings on two human-human (HH) corpora (the Fisher Corpus English Part 1 and the Columbia Games Corpus) and one human-machine (HM) corpus (the Voice Assistant Conversation Corpus, VACC). In addition to semantic features, we also trained DNN-based models using two auditory embeddings (TRIpLet Loss network (TRILL) vectors and low-level descriptor (LLD) features) and two units of analysis (inter-pausal unit and turn). The results show that semantic entrainment can be assessed with our model, that the models can distinguish between HH and HM interactions, and that the two units of analysis for extracting acoustic features yield comparable findings.
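A minimal sketch of the kind of semantic feature extraction the abstract describes: utterance-level embeddings from DistilBERT via Hugging Face Transformers, with adjacent-turn cosine similarity as a simple entrainment proxy. This is not the authors' code; the model name, mean-pooling step, and similarity scoring are illustrative assumptions (the paper also reports XLM-RoBERTa and USE embeddings).

import torch
from transformers import AutoTokenizer, AutoModel

# Illustrative checkpoint choice; the paper does not specify this exact model.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")
model.eval()

def embed(utterances):
    """Return one fixed-size vector per utterance (mean-pooled last layer)."""
    batch = tokenizer(utterances, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state        # (B, T, 768)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    summed = (hidden * mask).sum(dim=1)
    return summed / mask.sum(dim=1)                       # (B, 768)

# Entrainment could then be scored as similarity between adjacent turns:
a, b = embed(["I think we should go left.", "Yeah, left sounds right."])
similarity = torch.cosine_similarity(a, b, dim=0).item()
print(f"cosine similarity: {similarity:.3f}")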


doi: 10.21437/Interspeech.2023-1929

Cite as: Kejriwal, J., Beňuš, Š., Rojas-Barahona, L.M. (2023) Unsupervised Auditory and Semantic Entrainment Models with Deep Neural Networks. Proc. INTERSPEECH 2023, 2628-2632, doi: 10.21437/Interspeech.2023-1929

@inproceedings{kejriwal23_interspeech,
  author={Jay Kejriwal and Štefan Beňuš and Lina M. Rojas-Barahona},
  title={{Unsupervised Auditory and Semantic Entrainment Models with Deep Neural Networks}},
  year={2023},
  booktitle={Proc. INTERSPEECH 2023},
  pages={2628--2632},
  doi={10.21437/Interspeech.2023-1929},
  issn={2308-457X}
}