Self-sentiment provides direct feedback from users and is vital for accurately evaluating and improving the quality of dialogue systems. However, few studies focus on self-sentiment prediction, and existing work on third-party sentiment prediction suffers from two problems when applied to self-sentiments: (1) self-sentiment annotations are labeled by the speakers themselves, leading to strong individual bias in the annotations and suboptimal predictions; and (2) the difficulty of collecting sufficient data with self-sentiment annotations limits the dataset size, resulting in overfitting. This work therefore proposes a novel meta-learning domain adversarial contrastive neural network (MetaDACNN) that extracts user-shared prior knowledge and learns user-specific classifiers to handle individual bias and alleviate overfitting. Experimental results on two public datasets show that MetaDACNN improves prediction performance and alleviates individual bias compared with state-of-the-art models.
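To make the three ingredients named above concrete, the sketch below illustrates how a user-shared encoder, a domain-adversarial user discriminator (via gradient reversal), and a contrastive objective could be combined in one training step. This is only a minimal illustration under assumptions, not the authors' MetaDACNN implementation: all module names, dimensions, loss weights, and the simplified supervised contrastive loss are hypothetical, and the meta-learning inner/outer loop that adapts the classifier per user is omitted for brevity.

```python
# Illustrative sketch only; hypothetical architecture and hyperparameters.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used for domain-adversarial training."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse gradients flowing into the encoder from the user discriminator.
        return -ctx.lam * grad_output, None


class SketchModel(nn.Module):
    def __init__(self, in_dim=128, hid=64, n_classes=3, n_users=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())   # user-shared encoder
        self.classifier = nn.Linear(hid, n_classes)                        # sentiment head
        self.user_discriminator = nn.Linear(hid, n_users)                  # adversarial user head

    def forward(self, x, lam=1.0):
        z = self.encoder(x)
        logits = self.classifier(z)
        user_logits = self.user_discriminator(GradReverse.apply(z, lam))
        return z, logits, user_logits


def supervised_contrastive_loss(z, labels, temperature=0.1):
    """Simplified supervised contrastive loss: pull same-label representations together."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature
    mask = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    mask.fill_diagonal_(0)  # exclude self-pairs from the positive set
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    denom = mask.sum(1).clamp(min=1)
    return -(mask * log_prob).sum(1).div(denom).mean()


# One hypothetical training step on dummy data:
# task loss + adversarial user loss (user-invariant encoder) + contrastive loss.
model = SketchModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 128)             # dummy utterance features
y = torch.randint(0, 3, (16,))       # self-sentiment labels
u = torch.randint(0, 10, (16,))      # speaker/user ids treated as domains

z, logits, user_logits = model(x)
loss = (F.cross_entropy(logits, y)
        + F.cross_entropy(user_logits, u)
        + supervised_contrastive_loss(z, y))
opt.zero_grad()
loss.backward()
opt.step()
```

In a meta-learning variant of this setup, the classifier head would additionally be adapted per user with a few gradient steps on that user's support data before evaluating on their query data; that loop is left out here to keep the sketch short.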