Speech super-resolution, or Bandwidth Extension (BWE), can improve downstream tasks such as Automatic Speaker Verification (ASV). We introduce Self-FiLM, a simple novel technique that injects self-supervised representations into existing BWE models via Feature-wise Linear Modulation (FiLM). We hypothesize that these representations encode domain and environment information, which can help BWE achieve zero-shot generalization. Self-FiLM improves conditional GAN (CGAN)-based BWE by 18% (relative) in Equal Error Rate and 8.5% in minimum Decision Cost Function for a state-of-the-art x-vector/Probabilistic Linear Discriminant Analysis ASV system on the SRE21 test set. We further improve performance by using deep feature losses from time-domain models and by re-training data2vec 2.0 models on naturalistic wideband (VoxCeleb) and telephone (SRE Superset etc.) data. Lastly, we integrate Self-FiLM into CycleGAN to obtain a completely unsupervised solution that matches the semi-supervised CGAN performance.
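As a minimal sketch of the FiLM mechanism the abstract builds on: a conditioning embedding (here, a pooled self-supervised representation) predicts a per-channel scale (gamma) and shift (beta) that modulate a feature map inside the BWE model. All dimensions, weights, and the `film` function below are hypothetical illustrations in NumPy, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def film(features, cond, w_gamma, b_gamma, w_beta, b_beta):
    """Feature-wise Linear Modulation: scale and shift each feature
    channel using parameters predicted from a conditioning vector.

    features: (channels, time) feature map inside the BWE model
    cond:     (cond_dim,) conditioning embedding, e.g. a pooled
              self-supervised representation (hypothesized to carry
              domain/environment information)
    """
    gamma = w_gamma @ cond + b_gamma   # (channels,) per-channel scale
    beta = w_beta @ cond + b_beta      # (channels,) per-channel shift
    return gamma[:, None] * features + beta[:, None]

# Hypothetical dimensions for illustration only.
channels, time, cond_dim = 4, 10, 8
features = rng.standard_normal((channels, time))
cond = rng.standard_normal(cond_dim)
w_gamma = rng.standard_normal((channels, cond_dim))
w_beta = rng.standard_normal((channels, cond_dim))
b_gamma = np.ones(channels)   # bias init near identity: gamma ~ 1
b_beta = np.zeros(channels)   # bias init near identity: beta ~ 0

out = film(features, cond, w_gamma, b_gamma, w_beta, b_beta)
print(out.shape)  # (4, 10)
```

Note that with zero projection weights the layer reduces to the identity (gamma = 1, beta = 0), so the modulation can be learned as a perturbation of the unconditioned model.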