The vulnerability of automatic speaker recognition systems to imposture or spoofing is widely acknowledged. This paper shows that simple spoofing attacks using artificial, non-speech-like signals can provoke extremely high false alarm rates, highlighting the need for spoofing countermeasures. We show that two new but trivial countermeasures, based on higher-level dynamic features and on voice quality assessment, offer varying degrees of protection, and that further work is needed to develop more robust spoofing countermeasure mechanisms. Finally, we show that certain classifiers are inherently more robust to such attacks than others, which strengthens the case for fused-system approaches to automatic speaker recognition.
Index Terms: automatic speaker verification, biometrics, spoofing, imposture, countermeasures