Speech is a valuable marker of disease onset and progression in amyotrophic lateral sclerosis (ALS). Acoustic and kinematic analyses have each been used previously to characterize speech impairment in ALS, and there is growing interest in combining these modalities within novel analytical platforms. We explored the use of a multimodal (audio/video) speech assessment pipeline in patients with ALS of varying severity. Participants performed a passage reading task, and clinical outcome measures (e.g., speech function) were collected. Speech data were analyzed using a custom automated acoustic and kinematic pipeline, and sparse canonical correlation analysis (SCCA) was then used to relate the multimodal speech features to the clinical outcomes. Both acoustic and kinematic features loaded strongly onto the clinical data (|loadings| ≥ 0.50), indicating that the multimodal features captured complementary information about speech function. These findings reinforce the value of multimodal assessment techniques and point toward next steps in the development of remote speech assessment.
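
For readers unfamiliar with SCCA, the sketch below illustrates one common way to obtain sparse canonical weights and the variable loadings referenced above, in the spirit of penalized matrix decomposition approaches to sparse CCA. It is a minimal, hypothetical illustration only: the feature matrices, penalty values, and the `sparse_cca`/`soft_threshold` helpers are assumptions for demonstration and do not represent the study's actual analysis pipeline.

```python
"""Minimal rank-1 sparse CCA sketch (Lagrangian soft-thresholding).
X: hypothetical acoustic/kinematic speech features; Y: hypothetical clinical outcomes."""
import numpy as np


def soft_threshold(a, c):
    """Elementwise soft-thresholding operator (encourages sparse weights)."""
    return np.sign(a) * np.maximum(np.abs(a) - c, 0.0)


def sparse_cca(X, Y, l1_x=0.1, l1_y=0.1, n_iter=200, tol=1e-8):
    """Estimate one pair of sparse canonical weight vectors (u, v)."""
    X = X - X.mean(axis=0)                             # center speech features
    Y = Y - Y.mean(axis=0)                             # center clinical outcomes
    K = X.T @ Y                                        # cross-covariance (up to scaling)
    v = np.linalg.svd(K, full_matrices=False)[2][0]    # warm start from leading SVD direction
    u = np.zeros(X.shape[1])
    for _ in range(n_iter):
        # Alternate updates: soft-threshold, then renormalize each weight vector
        u_new = soft_threshold(K @ v, l1_x)
        if np.linalg.norm(u_new) > 0:
            u_new = u_new / np.linalg.norm(u_new)
        v_new = soft_threshold(K.T @ u_new, l1_y)
        if np.linalg.norm(v_new) > 0:
            v_new = v_new / np.linalg.norm(v_new)
        if np.linalg.norm(u_new - u) < tol and np.linalg.norm(v_new - v) < tol:
            u, v = u_new, v_new
            break
        u, v = u_new, v_new
    return u, v


# Illustrative random data: 40 participants, 12 speech features, 4 clinical measures
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 12))
Y = rng.normal(size=(40, 4))
u, v = sparse_cca(X, Y)

# "Loadings" here are correlations of each original variable with its canonical
# variate, analogous to the |loading| >= 0.50 threshold reported above.
scores_x, scores_y = X @ u, Y @ v
loadings_x = np.array([np.corrcoef(X[:, j], scores_x)[0, 1] for j in range(X.shape[1])])
loadings_y = np.array([np.corrcoef(Y[:, k], scores_y)[0, 1] for k in range(Y.shape[1])])
print(np.round(loadings_x, 2), np.round(loadings_y, 2))
```

In this sketch, the L1 penalties (`l1_x`, `l1_y`) control how many features receive nonzero weight; a practical analysis would tune them (e.g., by permutation or cross-validation) rather than fix them as done here.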