Understanding the neural foundations of speech production is a cornerstone for advancing both theoretical knowledge and practical applications in neuroscience and speech technology. This study analyzes magnetoencephalography recordings to explore the classification of phonetic units (phones) from brain activity during speech production. We employ machine learning techniques to decode pairs of phones from a dataset recorded while subjects performed speech perception and production tasks. Our findings indicate superior decoding accuracy during speech production compared to listening. Speech production remains far less explored in neuroscience than perception, owing to its inherent complexities. This research not only deepens our understanding of speech processing in the brain but also underscores the critical need to investigate speech production, which, despite its challenges, holds the key to developing real-life applications such as improved brain-computer interfaces for communication aids.
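
The pairwise decoding setup described above can be sketched as a binary classifier trained on trial-level feature vectors. The following is a minimal illustration only: the simulated data, feature dimensions, channel counts, and choice of logistic regression are assumptions for demonstration and do not reflect the study's actual pipeline or dataset.

```python
# Hypothetical sketch of pairwise phone decoding on MEG-like features.
# All data here is simulated; shapes and the classifier are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated trials: each trial is a flattened (channels x time) feature vector
# for one of two phones (e.g. /a/ vs /i/), with a small class-dependent offset.
n_trials = 100
n_features = 204 * 20          # e.g. 204 sensors x 20 time samples (assumed)
X_a = rng.normal(0.0, 1.0, (n_trials, n_features))
X_i = rng.normal(0.3, 1.0, (n_trials, n_features))
X = np.vstack([X_a, X_i])
y = np.array([0] * n_trials + [1] * n_trials)

# Binary (pairwise) decoder; cross-validated accuracy is the decoding score.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"pairwise decoding accuracy: {scores.mean():.2f}")
```

In practice, one such binary decoder would be trained per phone pair, and chance level (0.5) serves as the baseline against which production and perception conditions are compared.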