The potential benefits of Computer-Assisted Language Learning (CALL) systems are often evaluated using resource-heavy pre- and post-tests. However, if a CALL system includes an Automatic Speech Recognition (ASR) based scoring function, analyzing practice data from its log files may provide sufficient information about user progress in pronunciation. In this paper, we propose measuring progress from the practice data that users produce during regular use, and we compare this approach with a traditional pre- and post-test method. We analyzed four ASR-based pronunciation metrics and extracted 106 acoustic features for all sentences. Learners using our CALL system showed significant improvement in both the pronunciation and the acoustic measures, comparable to the trends observed in our traditional pre- and post-test data. Our automated approach thus identified trends in user progress very similar to those found by human judgment, suggesting its potential to simplify laborious pre- and post-test cycles.