This paper describes the development of an HMM-based system for automatic speech assessment, particularly of dysarthric speech. As a first step, we compare recognizer performance on a closed-set, forced-choice identification test of dysarthric speech with the performance of untrained listeners on the same test. Results indicate that HMM recognition accuracy, averaged over all utterances of a dysarthric talker, is well correlated with measures of overall talker intelligibility. However, on an utterance-by-utterance basis, the correlation between the error patterns of the human subjects and the machine, while statistically significant, accounts for at best only about 25 percent of the variance. Potential methods for improving this performance are considered.
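
As a brief illustration of what "accounts for about 25 percent of the variance" means at the utterance level, the sketch below (not the paper's actual analysis pipeline, and using hypothetical placeholder error rates) computes the Pearson correlation between per-utterance error rates from human listeners and from an HMM recognizer, then reports the squared correlation as the proportion of variance explained.

```python
# Minimal sketch, assuming per-utterance error rates are available for both
# human listeners and the HMM recognizer. The values below are hypothetical
# placeholders, not data from the paper.
import numpy as np

# Hypothetical fraction of misidentifications per utterance in the
# closed-set, forced-choice test.
human_error = np.array([0.10, 0.35, 0.80, 0.25, 0.55, 0.15, 0.60, 0.40])
hmm_error = np.array([0.05, 0.50, 0.70, 0.10, 0.65, 0.30, 0.45, 0.20])

# Pearson correlation between the two error patterns.
r = np.corrcoef(human_error, hmm_error)[0, 1]

# r^2 is the proportion of variance in one error pattern accounted for
# by the other; the paper reports roughly 0.25 at the utterance level.
print(f"r = {r:.2f}, variance accounted for (r^2) = {r**2:.2f}")
```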