Speech rate is an essential element of prosodic analysis. However, its acoustic measurement for large-scale applications is impractical if attempted manually. Automatic computation of speech rate is therefore an attractive alternative, provided that sufficiently reliable performance is guaranteed. In this light, the performance of speech rate estimation tools has been assessed using several metrics, the correlation coefficient being one of the most widely used. Nevertheless, it is not clear to what extent these methods offer a sufficiently reliable measurement of the performance of such automatic systems. To address this issue, the present paper reviews the evaluation methods that have been used in the literature to assess the automatic computation of speech rate, and tests them on a corpus of read and spontaneous speech in Spanish. The results show that error-based metrics are more robust and appropriate than correlation coefficients. Based on these empirical results, the study concludes with a proposal of standard measures for evaluating automatic speech rate computation.
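
As a minimal illustration of why correlation coefficients can overstate performance: Pearson's r is invariant to any constant offset or linear rescaling, so an estimator with a systematic bias can correlate perfectly with the reference while still being wrong at every point. The sketch below uses hypothetical speech-rate values (syllables per second), not data from the corpus studied here:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rmse(xs, ys):
    """Root-mean-square error between estimates and reference values."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

# Hypothetical manually measured reference rates (syllables per second)
reference = [3.0, 4.5, 5.2, 6.1, 4.0, 5.8]
# A biased automatic estimator: always 2 syl/s too high
biased = [r + 2.0 for r in reference]

print(round(pearson_r(reference, biased), 6))  # ~1.0: correlation hides the bias
print(round(rmse(reference, biased), 6))       # ~2.0: the error metric exposes it
```

The constant-offset estimator is an extreme case, but it shows the distinction the abstract draws: error-based metrics penalize systematic deviations that correlation, by construction, cannot detect.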