ISCA Archive Interspeech 2025

Universal Semantic Disentangled Privacy-preserving Speech Representation Learning

Biel Tura-Vecino, Subhadeep Maji, Aravind Varier, Antonio Bonafonte, Ivan Valles, Michael Owen, Constantinos Papayiannis, Leif Radel, Grant Strimel, Oluwaseyi Feyisetan, Roberto Barra-Chicote, Ariya Rastrow, Volker Leutnant, Trevor Wood

The use of human speech to train LLMs poses privacy concerns due to these models' ability to generate samples that closely resemble artifacts in the training data. We propose a speaker privacy-preserving representation learning method based on the Universal Speech Codec (USC), a computationally efficient codec that disentangles speech into: (i) privacy-preserving, semantically rich representations capturing content and speech paralinguistics, and (ii) residual acoustic and speaker representations that enable high-fidelity reconstruction. Evaluations show that USC's semantic representation preserves content, prosody, and sentiment while removing identifiable speaker traits. Additionally, we present an evaluation methodology for measuring privacy-preserving properties. We compare USC against other speech codecs and demonstrate its effectiveness in privacy-preserving representation learning, showcasing the trade-offs between speaker anonymization and paralinguistics retention.