This paper develops a memory-augmented sequential learning method based on a contrastive disentangled transformer. Conventional transformers struggle to characterize long sequences, since the input length must be restricted to avoid excessive memory requirements. A straightforward remedy is to divide a long sequence into short segments, but this introduces context fragmentation. In this paper, a contrastive disentangled memory is exploited to address both the growing computation cost and the excessive memory demand caused by long sequences. In particular, an informative selection over the disentangled memory slots is proposed for the iterative update of a large-span sequence representation. The method maximizes the semantic diversity of the memory slots and captures contextual semantics via contrastive learning. Experiments on language understanding show that the proposed method mitigates context fragmentation with reduced computation.
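
The combination of objectives described above, encouraging semantically diverse memory slots while aligning each slot with its segment context through contrastive learning, can be sketched roughly as in the PyTorch snippet below. This is a minimal illustrative sketch, not the paper's exact formulation: the function names, the squared-similarity diversity penalty, the InfoNCE-style alignment term, the temperature value, and the 0.5 weighting are all assumptions introduced here for illustration.

```python
import torch
import torch.nn.functional as F

def slot_diversity_loss(memory):
    """Penalize pairwise cosine similarity between memory slots.

    memory: (num_slots, dim) tensor of disentangled memory slots.
    Lower off-diagonal similarity encourages more diverse (disentangled) slots.
    Illustrative sketch, not the paper's exact objective.
    """
    m = F.normalize(memory, dim=-1)
    sim = m @ m.t()                               # (S, S) cosine similarities
    off_diag = sim - torch.diag(torch.diag(sim))  # zero out self-similarity
    return off_diag.pow(2).mean()

def slot_context_infonce(memory, context, temperature=0.1):
    """InfoNCE-style alignment: each slot is pulled toward its paired
    segment-level context vector and pushed away from the others.

    memory:  (S, dim) memory slots
    context: (S, dim) matching context summaries (one positive per slot)
    Hypothetical formulation; the temperature is an assumed value.
    """
    m = F.normalize(memory, dim=-1)
    c = F.normalize(context, dim=-1)
    logits = m @ c.t() / temperature              # (S, S) similarity logits
    targets = torch.arange(m.size(0))             # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: 8 memory slots of dimension 64 and their paired context summaries.
slots = torch.randn(8, 64, requires_grad=True)
ctx = torch.randn(8, 64)
loss = slot_context_infonce(slots, ctx) + 0.5 * slot_diversity_loss(slots)
loss.backward()
```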