Sound event detection is the task of recognizing sounds and determining their extent (onset/offset times) within an audio clip. Existing systems commonly predict sound-presence posteriors in short time frames, threshold them to obtain binary frame-level presence decisions, and derive the extent of individual events by merging presence decisions in consecutive frames. In this paper, we show that frame-level thresholding degrades event extent prediction by coupling it with the system's sound-presence confidence. We propose to decouple the prediction of event extent and confidence by introducing sound event bounding boxes (SEBBs), which format each sound event prediction as a combination of a class type, an extent, and an overall confidence. We also propose a change-detection-based algorithm for converting frame-level posteriors into SEBBs. We find that the algorithm significantly improves the performance of DCASE 2023 Challenge systems, boosting the state of the art from 0.644 to 0.686 PSDS1.
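To make the two prediction formats concrete, the sketch below contrasts conventional frame-level thresholding with a simplified SEBB-style decoding. It is an illustrative stand-in, not the paper's change-detection algorithm: the function names, the threshold values, and the choice of scoring each event with its peak posterior are all assumptions for this example. The key point it demonstrates is the decoupling: a fixed, low threshold delimits each event's extent, while the event's confidence is assigned separately, so sweeping an operating point changes which events are reported but not their onset/offset times.

```python
import numpy as np


def binarize_segments(active):
    """Return (onset, offset) frame-index pairs for runs of True
    in a 1-D boolean activity array (offsets are exclusive)."""
    edges = np.diff(active.astype(int), prepend=0, append=0)
    onsets = np.flatnonzero(edges == 1)
    offsets = np.flatnonzero(edges == -1)
    return list(zip(onsets, offsets))


def frame_thresholding(posteriors, thr=0.5):
    """Conventional decoding: a single threshold yields both the
    detection decision and the event extent, so the extent shifts
    whenever the confidence threshold is changed."""
    return binarize_segments(posteriors > thr)


def posteriors_to_sebbs(posteriors, extent_thr=0.15):
    """Simplified SEBB decoding sketch (hypothetical, single-class):
    a low threshold fixes each event's extent; each event then gets
    one overall confidence (here, its peak posterior), which a
    downstream evaluator can sweep without altering the extent."""
    sebbs = []
    for on, off in binarize_segments(posteriors > extent_thr):
        sebbs.append({
            "onset": int(on),
            "offset": int(off),
            "confidence": float(posteriors[on:off].max()),
        })
    return sebbs


# Toy posterior track for one sound class:
post = np.array([0.1, 0.2, 0.7, 0.9, 0.6, 0.2, 0.1])
print(frame_thresholding(post, thr=0.5))  # [(2, 5)]
print(posteriors_to_sebbs(post))
# [{'onset': 1, 'offset': 6, 'confidence': 0.9}]
```

In the toy example, raising the frame-level threshold from 0.5 would shrink the detected event, whereas in the SEBB format the extent (frames 1 to 6) stays fixed and only the event's 0.9 confidence is compared against the operating point.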