An objective metric for evaluating synthetic speech is significant because it provides a quantitative measure for the systematic assessment of speech quality. However, previous works have focused on predicting subjective quality scores in a supervised manner, which requires a large amount of paired data comprising speech and its perceived quality scores. In this work, we introduce a novel metric, the UNIQUE score, that integrates the concept of anomaly detection to evaluate input speech in an unsupervised manner. By leveraging speech features from a self-supervised model, the system learns a rich distribution of natural speech that enables it to detect differences between real and synthesized speech. By comparing the UNIQUE score with other objective measures on synthetic speech from various text-to-speech models and datasets, we demonstrate that our metric provides an effective evaluation of speech quality and correlates more strongly with human perception.
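To make the general idea concrete, the sketch below illustrates one way an unsupervised, anomaly-detection-style quality score over self-supervised speech features could be computed. It is not the UNIQUE implementation described in this work: the wav2vec 2.0 backbone, the mean-pooled utterance embeddings, and the Gaussian/Mahalanobis anomaly model are all assumptions chosen for illustration.

```python
import torch
import torchaudio

# Pretrained self-supervised speech model (wav2vec 2.0 base from torchaudio).
# The actual SSL backbone and anomaly model used by UNIQUE may differ.
bundle = torchaudio.pipelines.WAV2VEC2_BASE
ssl_model = bundle.get_model().eval()


def utterance_embedding(waveform: torch.Tensor, sample_rate: int) -> torch.Tensor:
    """Mean-pool frame-level SSL features into one utterance-level vector."""
    if sample_rate != bundle.sample_rate:
        waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)
    with torch.inference_mode():
        features, _ = ssl_model.extract_features(waveform)  # list of layer outputs
    return features[-1].mean(dim=1).squeeze(0)  # last layer, averaged over time


def fit_reference(real_embeddings: torch.Tensor):
    """Fit a Gaussian over embeddings of natural speech (no quality labels needed)."""
    mean = real_embeddings.mean(dim=0)
    centered = real_embeddings - mean
    cov = centered.T @ centered / (real_embeddings.shape[0] - 1)
    cov += 1e-6 * torch.eye(cov.shape[0])  # regularize so the covariance is invertible
    return mean, torch.linalg.inv(cov)


def anomaly_score(embedding: torch.Tensor, mean: torch.Tensor, cov_inv: torch.Tensor) -> float:
    """Mahalanobis distance to the natural-speech distribution; larger values
    indicate speech that deviates more from real speech (i.e., lower quality)."""
    diff = embedding - mean
    return torch.sqrt(diff @ cov_inv @ diff).item()
```

Under this framing, the reference model is fit once on embeddings of real recordings, and any synthetic utterance is then scored by how far its embedding falls from that distribution, which is what allows evaluation without paired subjective ratings.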