In this study, we propose Dynamic Layer Fusion for EEND (DLF-EEND), a novel approach for integrating Transformer layer information in end-to-end speaker diarization. During training, the model introduces an auxiliary branch that uses dynamic routing to adaptively fuse multi-resolution representations at each time step. Applying the Permutation-Invariant Training (PIT) loss to the fused features encourages hierarchical learning, from low-level acoustic cues to high-level speaker separation. This preserves distinct layer-specific information and improves diarization accuracy, particularly in overlapping speech and at speaker transitions. During inference, only the main branch is used, reducing computation while retaining the benefits of inter-layer fusion. Experiments show that DLF-EEND reduces the Diarization Error Rate (DER) by 59.18% on simulated datasets and by 14.26% on CALLHOME, outperforming SA-EEND.
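As a rough illustration of the fusion idea, the sketch below computes a per-time-step softmax gate over the outputs of several Transformer layers and takes their weighted sum. This is a minimal NumPy sketch under assumed shapes and a hypothetical gate parameterization; it is not the paper's exact routing mechanism, and the function and parameter names (`dynamic_layer_fusion`, `gate_w`, `gate_b`) are illustrative.

```python
import numpy as np

def dynamic_layer_fusion(layer_outputs, gate_w, gate_b):
    """Illustrative per-time-step fusion of Transformer layer outputs.

    layer_outputs: list of L arrays, each (T, D) -- one per Transformer layer
    gate_w: (D, L) projection producing one routing logit per layer
    gate_b: (L,) bias for the routing logits
    Returns the fused (T, D) features and the (T, L) routing weights.
    """
    stacked = np.stack(layer_outputs, axis=-1)            # (T, D, L)
    # Assumption: the top layer's features drive the routing decision.
    logits = layer_outputs[-1] @ gate_w + gate_b          # (T, L)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)           # softmax over layers
    fused = (stacked * weights[:, None, :]).sum(axis=-1)  # (T, D)
    return fused, weights

# Usage: fuse 4 layers of 8-dim features over 10 frames
rng = np.random.default_rng(0)
layers = [rng.standard_normal((10, 8)) for _ in range(4)]
fused, weights = dynamic_layer_fusion(layers, rng.standard_normal((8, 4)),
                                      np.zeros(4))
print(fused.shape, weights.shape)  # (10, 8) (10, 4)
```

Because the weights are recomputed at every frame, the mixture can shift toward lower layers during overlapping speech and higher layers elsewhere, which is the adaptivity the dynamic routing is meant to provide.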