ISCA Archive Clarity 2025

Towards individualized models of hearing-impaired speech perception

Mark R. Saddler, Torsten Dau, Josh H. McDermott
Computational models that predict the real-world hearing abilities of individuals with hearing loss have the potential to transform hearing aid development. Deep artificial neural networks trained to perform ecological hearing tasks using simulated cochlear input reproduce many aspects of normal hearing, but it is not clear whether such models can also account for impaired hearing. We used the Clarity Prediction Challenge dataset to test whether a model jointly optimized for everyday sound localization and recognition tasks can predict the speech intelligibility of hearing-impaired listeners. We used the model’s learned feature representations as an intrusive speech intelligibility metric (predicting intelligibility from the similarity of representations of clean and distorted speech) and measured the effects of simulating individual listeners’ hearing losses in the model’s peripheral input. Individualizing the hearing loss simulations allowed our model to better predict speech intelligibility differences across listeners. However, this benefit was small when quantified via the overall human-model correlation, likely because the explainable variance in the dataset is driven more by the different hearing aids than by the different listeners.
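The intrusive metric described above — predicting intelligibility from the similarity of model representations of clean and distorted speech — can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name, the choice of cosine similarity, and the averaging across model stages are all assumptions for the sake of the example.

```python
import numpy as np

def intrusive_intelligibility(clean_feats, distorted_feats):
    """Hypothetical intrusive intelligibility metric.

    clean_feats / distorted_feats: lists of (time x channel) feature
    arrays, one per model stage, from the same network. In the paper's
    setting, the distorted input would pass through a simulated
    hearing-impaired periphery; here we just compare two feature sets.
    Returns the mean cosine similarity across stages (higher values
    would be read as predicting higher intelligibility).
    """
    sims = []
    for c, d in zip(clean_feats, distorted_feats):
        c, d = c.ravel(), d.ravel()
        cos = np.dot(c, d) / (np.linalg.norm(c) * np.linalg.norm(d) + 1e-12)
        sims.append(cos)
    return float(np.mean(sims))

# Identical representations give maximal similarity; adding noise lowers it.
rng = np.random.default_rng(0)
feats = [rng.standard_normal((50, 32)) for _ in range(3)]
noisy = [f + 0.5 * rng.standard_normal(f.shape) for f in feats]
assert intrusive_intelligibility(feats, feats) > 0.999
assert intrusive_intelligibility(feats, noisy) < intrusive_intelligibility(feats, feats)
```

Individualization, as described in the abstract, would enter through the peripheral front end (simulating each listener's hearing loss before feature extraction), not through the similarity computation itself.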