ISCA Archive Interspeech 2024

Collaborative Contrastive Learning for Hypothesis Domain Adaptation

Jen-Tzung Chien, I-Ping Yeh, Man-Wai Mak

Achieving desirable performance in speaker recognition under severe domain mismatch is challenging, and the challenge becomes even harsher when the source data are missing. To enhance low-resource speaker representation, this study deals with a practical scenario, called hypothesis domain adaptation, in which a model trained on a source domain is adapted, as a hypothesis, to a significantly different target domain without access to the source data. To pursue a domain-invariant representation, this paper proposes a novel collaborative hypothesis domain adaptation (CHDA) in which dual encoders are collaboratively trained to estimate pseudo source data, which are then utilized to maximize domain confusion. Combined with contrastive learning, CHDA is further enhanced by strengthening domain matching as well as speaker discrimination. Experiments on cross-language speaker recognition show the merit of the proposed method.
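
To make the idea concrete, the following is a minimal, illustrative training-step sketch (not the authors' implementation): it assumes two target-domain encoders ("dual encoders"), a pseudo-source estimator, a domain discriminator whose confusion is maximized, and an InfoNCE-style contrastive loss for speaker discrimination. All module names, sizes, and the exact loss forms are assumptions for illustration only.

# Illustrative CHDA-style sketch in PyTorch; components and losses are assumed,
# not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, emb = 80, 192                          # assumed feature / embedding sizes

enc_a = nn.Sequential(nn.Linear(dim, emb), nn.ReLU(), nn.Linear(emb, emb))   # dual
enc_b = nn.Sequential(nn.Linear(dim, emb), nn.ReLU(), nn.Linear(emb, emb))   # encoders
pseudo_src = nn.Sequential(nn.Linear(emb, emb), nn.ReLU(), nn.Linear(emb, emb))
domain_disc = nn.Linear(emb, 2)             # 0 = (pseudo) source, 1 = target

opt = torch.optim.Adam(
    list(enc_a.parameters()) + list(enc_b.parameters()) +
    list(pseudo_src.parameters()) + list(domain_disc.parameters()), lr=1e-4)

def info_nce(z1, z2, tau=0.07):
    # Contrastive loss: matching rows (same speaker, two views) are positives.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

# Toy batch: two augmented views of the same target-domain utterances.
x1, x2 = torch.randn(8, dim), torch.randn(8, dim)

z_a, z_b = enc_a(x1), enc_b(x2)             # dual-encoder embeddings
z_src = pseudo_src(z_a.detach())            # estimated pseudo-source embeddings

# Domain-confusion term: drive the discriminator toward chance-level posteriors
# on pseudo-source vs. target embeddings (soft targets of 0.5 per class).
d_logits = domain_disc(torch.cat([z_src, z_b], dim=0))
soft_targets = torch.full_like(d_logits, 0.5)
loss_conf = F.cross_entropy(d_logits, soft_targets)

loss_con = info_nce(z_a, z_b)               # speaker discrimination across views
loss = loss_con + loss_conf
opt.zero_grad(); loss.backward(); opt.step()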