As headphones and earbuds with integrated AI assistants become more prevalent, it is important to prevent unauthorized access to the sensitive personal information these devices expose. One solution is a speaker identification (SID) model that registers authorized users on the device and verifies their identity in real time. However, such models struggle in noisy conditions and are vulnerable to voice cloning, spoofing, and other adversarial attacks. In this paper, we propose fine-tuning SID models on speech captured by the in-ear and in-earcup microphones commonly found on noise-cancelling headphones and earbuds to address these issues. We collected inside-microphone data from 195 speakers across 4 headphone and earbud models and show that fine-tuning the model on multiple devices can improve performance when evaluating on a single device.
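The register-then-verify flow described above can be sketched as embedding averaging at enrollment followed by cosine-similarity scoring at verification. This is an illustrative scheme only: the function names, the fixed threshold, and the assumption that some upstream model has already produced utterance embeddings are all ours, not the paper's implementation.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def enroll(embeddings):
    # Average several enrollment-utterance embeddings (e.g., from an
    # SID model run on inside-mic audio) into one speaker profile.
    n = len(embeddings)
    dim = len(embeddings[0])
    return [sum(e[i] for e in embeddings) / n for i in range(dim)]

def verify(profile, embedding, threshold=0.7):
    # Accept the speaker if similarity to the enrolled profile
    # exceeds a tuned threshold (0.7 here is an arbitrary placeholder).
    return cosine_similarity(profile, embedding) >= threshold
```

In practice the threshold would be tuned on held-out trials (e.g., to a target equal-error rate), and the embeddings would come from the fine-tuned SID model rather than being supplied directly.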