This paper summarizes our efforts in the NIST Language Recognition Evaluation 2022, which resulted in systems with competitive performance. We provide both a description and an analysis of the systems. We describe the data used to train our models, followed by the embedding extractors and backend classifiers. After covering the architecture, we concentrate on post-evaluation analysis: we compare different DNN topologies and backend classifiers, and we examine the impact of the data used to train them. We also report results with pre-trained XLS-R models. We present the performance of the systems in the Fixed condition, where participants are required to use only predefined data sets, and in the Open condition, which allows any data to be used for training.