In spoken language systems, segmenting utterances into coherent linguistic/semantic units is very useful, as it simplifies processing after the speech recognition phase. In this paper, a methodology for semantic boundary prediction is presented and tested on a corpus of person-to-person dialogues. The approach is based on binary decision trees and uses textual context, including broad classes of silent pauses, filled pauses, and human noises. The best results achieve more than 90% precision, almost 80% recall, and about a 3% false alarm rate.
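To make the decision-tree idea concrete, the following is a minimal sketch (not the authors' implementation) of training a single binary split for boundary prediction. The feature encoding — a silent-pause class, a filled-pause flag, and a human-noise flag — and the toy data are hypothetical, chosen only to illustrate how a Gini-based split on such context features could separate semantic boundaries from non-boundaries.

```python
def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 1.0 - p * p - (1.0 - p) ** 2

def best_split(X, y):
    """Find the (feature, threshold) pair minimizing weighted Gini impurity."""
    best, best_score = None, float("inf")
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [y[i] for i, row in enumerate(X) if row[f] <= t]
            right = [y[i] for i, row in enumerate(X) if row[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best_score:
                best, best_score = (f, t), score
    return best

# Hypothetical training data: each row is one candidate word boundary,
# encoded as [silent-pause class (0-2), filled-pause flag, human-noise flag];
# y marks whether it is a semantic boundary (1) or not (0).
X = [[2, 0, 0], [0, 1, 0], [2, 1, 0], [0, 0, 0], [1, 0, 1], [0, 0, 1]]
y = [1, 0, 1, 0, 1, 0]

feature, threshold = best_split(X, y)
# On this toy data the best split tests the silent-pause class,
# i.e. feature 0 at threshold 0.
```

A full tree would recurse on each side of the chosen split; this stump only shows the core split-selection step that a CART-style learner repeats at every node.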