Speech representation learning approaches for non-semantic tasks such as language recognition have explored either supervised embedding extraction using a classifier model or self-supervised representation learning from raw data. In this paper, we propose a novel framework that combines self-supervised representation learning with language label information during pre-training. This framework, termed Label Aware Speech Representation learning (LASR), uses a triplet-based objective function to incorporate language labels alongside the self-supervised loss function. Language recognition experiments are performed on two public datasets, FLEURS and Dhwani, where we show that the LASR framework improves over state-of-the-art systems in recognition performance. We also analyze the robustness of the LASR approach to noisy/missing labels, as well as its performance on the ASR task.
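The combination of a triplet term with a self-supervised loss can be sketched as follows. This is a minimal illustration, not the paper's implementation: the margin, the weighting factor `lam`, the cosine distance, and the helper names are all assumptions made here for clarity.

```python
import math

def cosine_distance(u, v):
    # 1 - cosine similarity between two equal-length embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

def triplet_loss(anchor, positive, negative, margin=0.5):
    # Hinge-style triplet loss: pull the same-language (positive) embedding
    # toward the anchor, push the different-language (negative) one away.
    d_pos = cosine_distance(anchor, positive)
    d_neg = cosine_distance(anchor, negative)
    return max(0.0, d_pos - d_neg + margin)

def combined_loss(ssl_loss, anchor, positive, negative, lam=1.0):
    # Illustrative total pre-training objective: the self-supervised loss
    # plus a weighted label-aware triplet term (lam is a hypothetical weight).
    return ssl_loss + lam * triplet_loss(anchor, positive, negative)
```

For example, when the anchor and positive embeddings coincide and the negative is orthogonal, the triplet term vanishes and the combined loss reduces to the self-supervised loss alone.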