In this work, we present an adaptation method for personalized acoustic scene classification on ultra-low-power embedded devices (EDs). The computational limitations of EDs and the large variety of acoustic scenes can lead to poor performance of the embedded classifier in specific real-world user environments. We propose a semi-supervised scheme that estimates the audio feature distribution at the ED level and then samples this statistical model to generate artificial data points that emulate user-specific audio features. A second, cloud-based classifier then assigns pseudo labels to these samples, which are merged with the existing labeled data to retrain the embedded classifier. The proposed method yields significant performance improvements on user-specific data sets and requires neither a persistent connection to a cloud service nor the transmission of raw audio or audio features. It thus combines low data rates, high utility, and privacy preservation.
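The following Python snippet is a minimal sketch of this pipeline, not the paper's implementation: the abstract does not name the statistical model or the classifiers, so the Gaussian mixture (scikit-learn `GaussianMixture`), the random-forest "cloud" classifier, the logistic-regression "embedded" classifier, and all feature dimensions and sample counts are assumptions chosen purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-ins for data the method assumes exists:
# a generic labeled corpus (feature vectors + scene labels) ...
X_labeled = rng.normal(size=(500, 16))
y_labeled = rng.integers(0, 3, size=500)
# ... and unlabeled features observed on the ED in the user's
# environment (never transmitted as-is).
X_user = rng.normal(loc=0.5, size=(200, 16))

# Step 1: estimate the on-device feature distribution.
# Hypothetical choice: a Gaussian mixture; the abstract only says
# "statistical model", so family and component count are assumptions.
gmm = GaussianMixture(n_components=4, random_state=0).fit(X_user)

# Step 2: sample artificial data points that emulate user-specific
# audio features.
X_synthetic, _ = gmm.sample(n_samples=1000)

# Step 3: a larger, cloud-based classifier trained on the labeled
# corpus assigns pseudo labels to the synthetic samples.
cloud_clf = RandomForestClassifier(n_estimators=200, random_state=0)
cloud_clf.fit(X_labeled, y_labeled)
pseudo_labels = cloud_clf.predict(X_synthetic)

# Step 4: merge pseudo-labeled synthetic data with the existing
# labeled data and retrain the (small) embedded classifier.
X_merged = np.vstack([X_labeled, X_synthetic])
y_merged = np.concatenate([y_labeled, pseudo_labels])
embedded_clf = LogisticRegression(max_iter=1000).fit(X_merged, y_merged)
```

In such a scheme, only the fitted distribution parameters (here, the GMM weights, means, and covariances) would leave the device rather than raw audio or audio features, which is what keeps the data rate low and preserves privacy.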