To improve the intelligibility of dysarthric patient speech, state-of-the-art work has focused on speaker-dependent voice conversion (VC) systems. Speaker-dependent systems are computationally expensive, as they require training an individual model for each speaker, and they often need many hours of speech data to perform well. Recording hours of speech can be challenging for patients with dysarthria. The present work, part of a master's thesis project, investigates speaker-independent approaches to improving dysarthric speech intelligibility. Objective evaluation of preliminary results demonstrates that speaker-independent VC has potential, with pretrained any-to-any models outperforming a single many-to-many model trained from scratch.
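To make the any-to-any contrast concrete, the sketch below illustrates the disentangle-and-resynthesise pattern that pretrained any-to-any VC models commonly follow: a content encoder strips speaker identity from the source (dysarthric) utterance, a speaker encoder embeds a healthy reference speaker, and a decoder recombines the two, so neither speaker needs to have been seen during training. This is a minimal illustrative sketch; all module names, layer choices, and dimensions are assumptions, not the models evaluated in this work.

```python
# Minimal sketch of any-to-any VC inference (illustrative only):
# content and speaker identity are encoded separately, then recombined.
import torch
import torch.nn as nn


class ContentEncoder(nn.Module):
    """Maps a mel spectrogram to speaker-independent content features."""
    def __init__(self, n_mels: int = 80, d_content: int = 256):
        super().__init__()
        self.net = nn.GRU(n_mels, d_content, batch_first=True)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        out, _ = self.net(mel)           # (batch, frames, d_content)
        return out


class SpeakerEncoder(nn.Module):
    """Summarises a reference utterance into a fixed speaker embedding."""
    def __init__(self, n_mels: int = 80, d_spk: int = 128):
        super().__init__()
        self.net = nn.GRU(n_mels, d_spk, batch_first=True)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        out, _ = self.net(mel)
        return out.mean(dim=1)           # (batch, d_spk), time-averaged


class Decoder(nn.Module):
    """Reconstructs a mel spectrogram from content + speaker embedding."""
    def __init__(self, d_content: int = 256, d_spk: int = 128, n_mels: int = 80):
        super().__init__()
        self.proj = nn.Linear(d_content + d_spk, n_mels)

    def forward(self, content: torch.Tensor, spk: torch.Tensor) -> torch.Tensor:
        # Broadcast the speaker embedding across all content frames.
        spk = spk.unsqueeze(1).expand(-1, content.size(1), -1)
        return self.proj(torch.cat([content, spk], dim=-1))


# Any-to-any conversion: neither speaker was seen at training time.
source_mel = torch.randn(1, 120, 80)     # dysarthric utterance (mock input)
reference_mel = torch.randn(1, 200, 80)  # healthy reference speaker (mock)

content = ContentEncoder()(source_mel)
speaker = SpeakerEncoder()(reference_mel)
converted_mel = Decoder()(content, speaker)  # pass to a vocoder for audio
print(converted_mel.shape)               # torch.Size([1, 120, 80])
```

Because the speaker embedding is supplied at inference time rather than baked into the model, this design avoids the per-patient training and large speech-data requirements that make speaker-dependent VC burdensome for dysarthric speakers.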