The connection between electroencephalography (EEG) signals and the human voice has gained significant attention, with studies demonstrating the feasibility of speech synthesis from EEG data. However, EEG-based voice conversion (VC) remains largely unexplored. To address this gap, we present the first EEG-based zero-shot VC system, which converts speech into a target speaker's voice without any prior data from that speaker. Our method integrates an EEG feature extraction module with an alignment module that maps EEG features to speaker-specific voice features. By combining a three-stage training strategy with a pre-trained VC model trained solely on speech data, we achieve zero-shot conversion. Experiments on the Single-Word-Production Dutch-iBIDS dataset confirm that the system reliably converts speech to a target speaker's voice. This work highlights the potential of EEG-based VC for advancing assistive communication and brain-computer interfaces. All demos are available at https://doi.org/10.5281/zenodo.15510829.