Approximately 60% of children with speech and language impairments
do not receive the intervention they need because their impairment
goes undetected by parents and professionals who lack specialized training.
Diagnosing these disorders requires a time-intensive battery of assessments,
which are often administered only after a parent, doctor, or teacher
raises concerns.
An automated test could enable more widespread screening for speech
and language impairments. We build classification models that distinguish
children with speech or language impairments from typically developing
children using acoustic features that describe speech and pause events
in story retell tasks. We developed and evaluated our method using
two datasets. The smaller dataset contains many children with severe
speech or language impairments and few typically developing children.
The larger dataset contains primarily typically developing children.
In three out of five classification tasks, even after accounting for
age, gender, and dataset differences, our models achieve good discrimination
performance (AUC > 0.70).
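As a brief illustration of the reported metric (not the authors' pipeline), AUC is the probability that a randomly chosen child with an impairment receives a higher risk score from the model than a randomly chosen typically developing child; an AUC of 0.50 is chance, 1.00 is perfect ranking. The labels and scores below are hypothetical toy data.

```python
def auc(labels, scores):
    """Area under the ROC curve via pairwise score comparison:
    the fraction of (positive, negative) pairs the scores rank correctly,
    counting ties as half-correct."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical screen: 1 = speech/language impairment, 0 = typically developing
labels = [1, 1, 1, 0, 0, 0, 0, 1]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.1, 0.7]
print(f"AUC = {auc(labels, scores):.2f}")  # prints "AUC = 0.94"
```

Under this reading, the paper's AUC > 0.70 threshold means the model ranks an impaired child above a typically developing child in more than 70% of such pairs.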