Traditional feature-based knowledge distillation aligns the student's features with the teacher's. However, such one-to-one alignment overlooks the structural relations between speakers in a mini-batch, and the large capacity gap between the two networks causes significant discrepancies between their features. To address these limitations, we propose distilling the inter- and intra-speaker relations. Instead of mimicking all pairwise relations between the student's and teacher's feature vectors, we propose Identifying and Distilling Informative Relations (IDIR), which enables the student network to acquire speaker relationships from the teacher. Moreover, a margin is added to the similarity scores of the informative pairs, further reducing intra-speaker variance and increasing inter-speaker separation. Evaluations with a simple x-vector student network demonstrate the method's superior performance across three test sets, confirming its effectiveness.
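To make the idea concrete, below is a minimal sketch of a relation-distillation loss with margin-adjusted targets. The specific selection rule (hardest positives and negatives under the teacher's similarities), the margin placement, and the MSE objective are our illustrative assumptions, not the exact IDIR formulation; `idir_loss`, `k`, and `margin` are hypothetical names and hyperparameters.

```python
import numpy as np

def cosine_sim(x):
    # Pairwise cosine similarities of row vectors.
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    return x @ x.T

def idir_loss(t_emb, s_emb, labels, margin=0.1, k=1):
    """Hypothetical sketch of informative-relation distillation.

    For each anchor, pick the k least-similar same-speaker pairs and the
    k most-similar different-speaker pairs according to the TEACHER's
    similarity matrix (an assumed notion of "informative"), then match
    the student's similarities to margin-adjusted teacher targets:
    positives are pulled closer (+margin), negatives pushed apart (-margin).
    """
    St, Ss = cosine_sim(t_emb), cosine_sim(s_emb)
    same = labels[:, None] == labels[None, :]
    eye = np.eye(len(labels), dtype=bool)
    loss, count = 0.0, 0
    for i in range(len(labels)):
        pos = np.where(same[i] & ~eye[i])[0]
        neg = np.where(~same[i])[0]
        hard_pos = pos[np.argsort(St[i, pos])[:k]]    # least-similar positives
        hard_neg = neg[np.argsort(St[i, neg])[-k:]]   # most-similar negatives
        for j in hard_pos:
            target = min(St[i, j] + margin, 1.0)      # reduce intra-speaker variance
            loss += (Ss[i, j] - target) ** 2
            count += 1
        for j in hard_neg:
            target = max(St[i, j] - margin, -1.0)     # increase inter-speaker separation
            loss += (Ss[i, j] - target) ** 2
            count += 1
    return loss / max(count, 1)
```

With `margin=0` and identical student and teacher embeddings the loss is zero, since the student's selected similarities already equal the (unshifted) teacher targets; a nonzero margin keeps pressure on the informative pairs even when the relations match.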