doi: 10.21437/Clarity.2025
The 3rd Clarity Prediction Challenge: A machine learning challenge for hearing aid intelligibility prediction
Jon Barker, Michael A. Akeroyd, Trevor J. Cox, John F. Culling, Jennifer Firth, Simone Graetzer, Graham Naylor
Lightweight Speech Intelligibility Prediction with Spectro-Temporal Modulation for Hearing-Impaired Listeners
Xiajie Zhou, Candy Olivia Mawalim, Huy Quoc Nguyen, Masashi Unoki
Intrusive Intelligibility Prediction with ASR Encoders
Hanlin Yu, Haoshuai Zhou, Boxuan Cao, Changgeng Mo, Linkai Li, Shan Xiang Wang
Towards individualized models of hearing-impaired speech perception
Mark R. Saddler, Torsten Dau, Josh H. McDermott
Non-Intrusive Multi-Branch Speech Intelligibility Prediction using Multi-Stage Training
Ryandhimas E. Zezario, Szu-Wei Fu, Dyah A.M.G. Wisnu, Hsin-Min Wang, Yu Tsao
Domain-Adapted Automatic Speech Recognition with Deep Neural Networks for Enhanced Speech Intelligibility Prediction
Haeseung Jeon, Jiwoo Hong, Saeyeon Hong, Hosung Kang, Bona Kim, Se Eun Oh, Noori Kim
Non-Intrusive Speech Intelligibility Prediction Using Whisper ASR and Wavelet Scattering Embeddings for Hearing-Impaired Individuals
Rantu Buragohain, Jejariya Ajaybhai, Aashish Kumar Singh, Karan Nathwani, Sunil Kumar Kopparapu
Integrating Linguistic and Acoustic Cues for Machine Learning-Based Speech Intelligibility Prediction in Hearing Impairment
Candy Olivia Mawalim, Xiajie Zhou, Huy Quoc Nguyen, Masashi Unoki
OSQA-SI: A Lightweight Non-Intrusive Analysis Model for Speech Intelligibility Prediction
Hsing-Ting Chen, Po-Hsun Sung
Non-intrusive Speech Intelligibility Prediction Model for Hearing Aids using Multi-domain Fused Features
Guojian Lin, Fei Chen
Word-level intelligibility model for the third Clarity Prediction Challenge
Mark Huckvale
A Chorus of Whispers: Modeling Speech Intelligibility via Heterogeneous Whisper Decomposition
Longbin Jin, Donghun Min, Eun Yi Kim
Speech intelligibility prediction based on syllable tokenizer
Szymon Drgas
The Dawn of Psychoacoustic Reverse Correlation: A Data-Driven Methodology for Determining Fine Grained Perceptual Cues of Speech Clarity
Paige Tuttösí, H. Henny Yeung, Yue Wang, Jean-Julien Aucouturier, Angelica Lim
TF-MLPNet: Tiny Real-Time Neural Speech Separation
Malek Itani, Tuochao Chen, Shyamnath Gollakota
Controllable joint noise reduction and hearing loss compensation using a differentiable auditory model
Philippe Gonzalez, Torsten Dau, Tobias May
Say Who You Want to Hear: Leveraging TTS Style Embeddings for Text-Guided Speech Extraction
Akam Rahimi, Triantafyllos Afouras, Andrew Zisserman