How language-specific are the speech representations learned by self-supervised models? Existing work has shown that a range of linguistic features can be successfully decoded from end-to-end models trained only on speech recordings. However, it is less clear to what extent pre-training on a specific language improves the encoding of language-specific linguistic information. Here we test the encoding of Dutch phonetic and lexical information in the internal representations of self-supervised Wav2Vec2 models. Pre-training exclusively on Dutch improves the representation of Dutch linguistic features compared with pre-training on similar amounts of English or larger amounts of multilingual data. This language-specific advantage is readily detected by trained clustering or classification probes, and is partially observable using zero-shot metrics. Furthermore, the language-specific benefit in linguistic feature encoding aligns with downstream performance on Automatic Speech Recognition.
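The probing setup described above can be illustrated with a minimal sketch: extract frame-level hidden states from one transformer layer of a Wav2Vec2 model and fit a supervised classification probe on top of them. This is not the authors' exact pipeline; the checkpoint name is a publicly available stand-in (the paper's Wav2Vec2-NL model is not assumed to be available under this name), and `layer_representations`, `train_phone_probe`, and the training data variables are hypothetical placeholders.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Stand-in checkpoint for illustration only; swap in a Dutch-pretrained model if available.
CHECKPOINT = "facebook/wav2vec2-base"

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(CHECKPOINT)
model = Wav2Vec2Model.from_pretrained(CHECKPOINT)
model.eval()


def layer_representations(waveform: np.ndarray, layer: int) -> np.ndarray:
    """Return frame-level hidden states (one vector per ~20 ms frame) from one layer."""
    inputs = feature_extractor(waveform, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        outputs = model(inputs.input_values, output_hidden_states=True)
    # hidden_states[0] is the CNN feature-encoder output; indices 1..N are transformer blocks.
    return outputs.hidden_states[layer].squeeze(0).numpy()


def train_phone_probe(train_waveforms, train_phone_labels, layer=6):
    """Fit a logistic-regression probe mapping frame representations to phone labels.

    `train_waveforms` and `train_phone_labels` are placeholders for a time-aligned
    labelled corpus (e.g. Dutch speech with one phone label per frame).
    """
    X = np.concatenate([layer_representations(w, layer) for w in train_waveforms])
    y = np.concatenate(train_phone_labels)
    probe = LogisticRegression(max_iter=1000)
    probe.fit(X, y)
    return probe
```

Probe accuracy on held-out frames can then be compared across pre-training conditions (Dutch-only, English-only, multilingual) and across layers; a zero-shot alternative would skip the trained probe and measure, for example, how well representation distances separate the same phonetic categories without supervision.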
Correction to Section 2 (Models): This archival publication misreports the amount of CommonVoice data included in pre-training the Wav2Vec2-NL model. The correct number of hours sampled from CommonVoice is 83, bringing the total Wav2Vec2-NL pre-training data to 831 hours. Across the full training set, audio samples ranged from 2 to 20 seconds in length.