Ordinate developed an automatic assessment of oral reading fluency that was administered to a large sample of American adults. Because fluent reading entails accurate reading, the machine's evaluations of oral reading accuracy were themselves evaluated. This paper reviews the methods and results of a study that assessed the accuracy of, and any bias in, this large-scale automatic assessment. The experiment compared machine scores with human ratings to measure scoring accuracy and to detect any bias with respect to linguistic/ethnic groups. The individual data products of the machine scoring are described, and the validation experiment is presented. The machine scores were substantially identical to the human ratings.
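As an illustrative sketch only (not the study's actual analysis or data), a machine-versus-human validation of this kind can be summarized by two quantities: the correlation between machine scores and human ratings, and the mean machine-minus-human difference within each group, where a per-group mean near zero suggests no group-level bias. All scores and group labels below are hypothetical:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between paired score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def group_bias(machine, human, groups):
    """Mean (machine - human) score difference per group.

    Values near zero suggest the machine neither over- nor
    under-scores that group relative to human raters.
    """
    diffs = {}
    for m, h, g in zip(machine, human, groups):
        diffs.setdefault(g, []).append(m - h)
    return {g: mean(d) for g, d in diffs.items()}

# Hypothetical scores and group labels, for illustration only.
machine = [3.1, 4.0, 2.8, 3.9, 4.5, 2.7]
human   = [3.0, 4.1, 2.9, 4.0, 4.4, 2.8]
groups  = ["A", "A", "A", "B", "B", "B"]

print(round(pearson_r(machine, human), 3))
print({g: round(v, 2) for g, v in group_bias(machine, human, groups).items()})
```

In a real validation, the per-group differences would typically be tested for statistical significance before concluding that no bias is present.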