This paper addresses the problem of speaker-based segmentation, i.e., segmenting audio data with respect to the speakers. In our study, we assume that no prior information about the speakers is available and that speakers do not talk simultaneously. Our segmentation technique operates in two passes: the most likely speaker changes are detected first and are then validated or discarded during the second pass. The practical significance of this study is illustrated by applying the technique to synthetic and real data, demonstrating its efficiency and comparing its performance with that of another segmentation technique.
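The following is a minimal sketch of such a two-pass scheme, assuming MFCC-like feature vectors. Since the abstract does not specify the paper's actual criteria, the distance measure used here (a ΔBIC criterion with full-covariance Gaussians), the window size, and both thresholds are illustrative assumptions, not the authors' exact choices.

```python
# Hypothetical two-pass speaker change detection; all numeric settings
# (window size, thresholds, BIC penalty) are illustrative assumptions.
import numpy as np

def delta_bic(x, y, penalty=2.0):
    """ΔBIC between modelling x and y jointly vs. separately.

    Positive values favour a speaker change between the two windows.
    """
    z = np.vstack([x, y])
    n, d = z.shape

    def logdet_cov(a):
        cov = np.cov(a, rowvar=False) + 1e-6 * np.eye(d)  # ridge for stability
        return np.linalg.slogdet(cov)[1]

    # Likelihood gain from splitting, minus a complexity penalty for the
    # extra Gaussian (d mean terms + d(d+1)/2 covariance terms).
    gain = 0.5 * (n * logdet_cov(z)
                  - len(x) * logdet_cov(x)
                  - len(y) * logdet_cov(y))
    n_params = d + d * (d + 1) / 2
    return gain - 0.5 * penalty * n_params * np.log(n)

def two_pass_segmentation(feats, win=100, low_thr=0.0, high_thr=50.0):
    """Pass 1: flag candidate change points; pass 2: validate or discard."""
    # Pass 1: ΔBIC between adjacent fixed-size windows; local maxima above
    # a permissive threshold become candidate speaker changes.
    scores = {}
    for t in range(win, len(feats) - win):
        scores[t] = delta_bic(feats[t - win:t], feats[t:t + win])
    candidates = [t for t in scores
                  if scores[t] > low_thr
                  and scores[t] >= scores.get(t - 1, -np.inf)
                  and scores[t] > scores.get(t + 1, -np.inf)]

    # Pass 2: re-test each candidate over the longer segments delimited by
    # its neighbours, keeping it only if a stricter threshold is met.
    changes, bounds = [], [0] + sorted(candidates) + [len(feats)]
    for i in range(1, len(bounds) - 1):
        left = feats[bounds[i - 1]:bounds[i]]
        right = feats[bounds[i]:bounds[i + 1]]
        if delta_bic(left, right) > high_thr:
            changes.append(bounds[i])
    return changes
```

The two-threshold design mirrors the abstract's description: the permissive first pass favours recall (few missed changes), while the stricter second pass, which re-tests each candidate over longer segments, removes false alarms.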