This study examines the effectiveness of crowdsourcing for reducing errors in automatic speaker identification (ASID). Errors can be reduced efficiently by manually revalidating the unreliable results produced by an ASID system. Ideally, errors should be corrected and correct answers should not be overturned. In authentication, a low false acceptance rate is desirable, while a high false rejection rate should be avoided from a usability viewpoint. However, it is not certain that humans can achieve such ideal speaker identification, and with crowdsourcing the presence of malicious workers cannot be ignored. This study therefore investigates whether manual verification of error-prone inputs by crowd workers can reduce ASID errors and whether the resulting corrections approach this ideal. Experiments on Amazon Mechanical Turk, in which 426 qualified workers identified 256 speech pairs from the VoxCeleb data, demonstrated that crowdsourced verification can significantly reduce the number of false acceptances without increasing the number of false rejections compared to the ASID system alone.
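The paper's method is described only in prose here; as an illustration, the sketch below shows one way such a hybrid pipeline could be organized, assuming each speech pair carries a scalar ASID similarity score, a fixed acceptance threshold, and a crowd-vote callback that returns the workers' majority judgment. All names (`Pair`, `hybrid_decision`, `error_rates`, `accept_threshold`, `uncertain_margin`) are hypothetical and not taken from the study.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Pair:
    score: float        # ASID similarity score for the speech pair (hypothetical)
    same_speaker: bool  # ground-truth label


def hybrid_decision(pairs: List[Pair],
                    accept_threshold: float,
                    uncertain_margin: float,
                    crowd_vote: Callable[[Pair], bool]) -> List[bool]:
    """Accept/reject each pair, deferring to the crowd only near the threshold."""
    decisions = []
    for p in pairs:
        if abs(p.score - accept_threshold) <= uncertain_margin:
            # Unreliable ASID output: revalidate via the crowd's majority vote.
            decisions.append(crowd_vote(p))
        else:
            # Confident ASID output: keep the automatic decision.
            decisions.append(p.score >= accept_threshold)
    return decisions


def error_rates(pairs: List[Pair], decisions: List[bool]) -> Tuple[float, float]:
    """False acceptance rate and false rejection rate of the final decisions."""
    fa = sum(d and not p.same_speaker for p, d in zip(pairs, decisions))
    fr = sum((not d) and p.same_speaker for p, d in zip(pairs, decisions))
    n_diff = sum(not p.same_speaker for p in pairs) or 1
    n_same = sum(p.same_speaker for p in pairs) or 1
    return fa / n_diff, fr / n_same
```

Under this sketch, the study's question amounts to whether routing only the uncertain band to workers lowers the false acceptance count without raising false rejections relative to using the automatic threshold alone.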