Laughter detection is important in the analysis of human communication. In recent years, machine learning has commonly been used to detect laughter. Laughter segmentation, the task of accurately identifying the location of laughter in audio, requires precisely annotated training data. However, manual annotation is very time-consuming, which makes preparing such data difficult. We propose a method that facilitates the creation of training data for laughter segmentation. Our method synthesizes laughter, inserts it into arbitrary audio, and thereby generates annotations automatically. Because the number and positions of the laughs can be set freely, and data augmentation can be applied to the laughter separately, a large amount of data can be created. In addition, because our method can automatically annotate arbitrary audio, it can easily produce datasets for training models on new data. Evaluation shows that our segmentation model outperforms existing models trained on manually annotated datasets.
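The core data-generation idea described above, overlaying laughter clips into background audio at chosen positions and recording the insertion spans as segment labels, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name `mix_laughter`, the additive-mixing scheme, and the gain parameter are assumptions for the example.

```python
import numpy as np

def mix_laughter(background, laughs, positions_s, sr=16000, gain=0.8):
    """Overlay laughter clips into background audio and return the mix
    plus (start, end) segment annotations in seconds.

    background:  1-D float array of audio samples.
    laughs:      list of 1-D float arrays (synthesized laughter clips).
    positions_s: insertion time of each clip, in seconds.
    (Hypothetical names and defaults, for illustration only.)
    """
    mixed = background.copy()
    annotations = []
    for laugh, t0 in zip(laughs, positions_s):
        start = int(t0 * sr)
        end = min(start + len(laugh), len(mixed))
        mixed[start:end] += gain * laugh[: end - start]  # additive mix
        annotations.append((start / sr, end / sr))      # label comes for free
    return mixed, annotations

# Example: 5 s of background noise with two laughter bursts.
rng = np.random.default_rng(0)
sr = 16000
bg = 0.1 * rng.standard_normal(5 * sr)
laughs = [0.3 * rng.standard_normal(sr),        # 1.0 s clip
          0.3 * rng.standard_normal(sr // 2)]   # 0.5 s clip
mix, ann = mix_laughter(bg, laughs, positions_s=[1.0, 3.5], sr=sr)
print(ann)  # [(1.0, 2.0), (3.5, 4.0)]
```

Because the annotations are derived from the insertion positions rather than from listening to the audio, no manual labeling step is needed, and the laughter clips can be augmented (pitch-shifted, time-stretched, gain-varied) independently of the background before mixing.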