Extensive research has been conducted on CALL systems for pronunciation error detection, which enable learners to improve through self-evaluation. However, many previous approaches rely on HMM or neural-network hybrid models which, although effective, typically require phonetically labelled L2 speech data that is expensive and often scarce. This paper presents a "zero-shot" transfer learning approach to detecting phonetic errors in the L2 English speech of native Japanese speakers using only unaligned, phonetically labelled native-language speech. The proposed method introduces a simple base architecture built on the XLSR-Wav2Vec2.0 model pre-trained on unlabelled multilingual speech. A phoneme mapping between the two languages is determined from differences in the articulation of similar phonemes. After fine-tuning on 70 hours of speech with low-resource automated phonetic labelling, the method achieved a Phonetic Error Rate of 0.214 on erroneous L2 speech, and additionally modelled phonemes of the L2 speaker's native language effectively without any fine-tuning on L2 speech.
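For context on the reported metric: a Phonetic Error Rate is conventionally computed as the Levenshtein (edit) distance between the predicted and reference phoneme sequences, normalised by the reference length. A minimal sketch of that computation, assuming this standard definition (the function name and example phoneme sequences are illustrative, not taken from the paper):

```python
def phonetic_error_rate(ref, hyp):
    """Edit distance between phoneme sequences, normalised by reference length."""
    m, n = len(ref), len(hyp)
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all of ref[:i]
    for j in range(n + 1):
        dp[0][j] = j  # insert all of hyp[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[m][n] / m

# Illustrative example: one substitution plus one insertion against a
# three-phoneme reference gives a rate of 2/3.
per = phonetic_error_rate(["s", "I", "N"], ["s", "I", "n", "g"])
print(round(per, 3))  # → 0.667
```

A rate of 0.214 therefore means roughly one phoneme-level edit per five reference phonemes.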