This paper describes the University of New South Wales system for the Interspeech 2013 ComParE emotion sub-challenge. The primary aim of the submission is to explore the performance of model-based variability compensation techniques applied to emotion classification and, as a consequence of participating in the challenge, to enable a comparison of these methods with alternative approaches. In keeping with this focused aim, a simple frame-based front-end of MFCC and ΔMFCC features is utilised. The systems outlined in this paper consist of a joint factor analysis based system, a system based on a library of speaker-specific emotion models, and a basic GMM-based system. The best combined system achieves an unweighted average recall (UAR) of 47.8% on the challenge development set and 35.7% on the test set.