This study introduces a novel real-time, gaze-directed audio-visual speech enhancement (AVSE) framework for hearing aids, designed to improve speech intelligibility for people with hearing loss (pHL) in noisy environments. Existing gaze estimation methods often rely solely on eye angle, which limits their accuracy. Our approach addresses this limitation with an eye gaze estimation algorithm that combines eye angle with the listener's nose position, and augments it with head pose estimation to capture the overall direction of attention. This combined information is used in real time to steer a beamformer toward the target speaker, enhancing their voice while suppressing background noise. Pilot trials with pHL users demonstrated high accuracy (99.55%–99.88%) in estimating the target speaker's direction with the proposed algorithm. This research presents a promising approach to improving communication accessibility and social interaction for pHL users by enhancing speech recognition in challenging listening situations. Future studies will quantify the improvement in speech intelligibility achieved by the gaze-directed AVSE framework.
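To make the steering idea concrete, the sketch below shows one minimal way a fused gaze/head-pose azimuth could drive a delay-and-sum beamformer. It is an illustrative assumption, not the paper's algorithm: the fusion weights, the function names (`fuse_gaze_and_head_pose`, `delay_and_sum_steering`), the linear array geometry, and the use of delay-and-sum steering are all hypothetical choices made for this example.

```python
import numpy as np

# Hypothetical fusion of eye-in-head angle and head-pose yaw into a single
# steering azimuth (degrees). The weights are illustrative, not from the paper.
def fuse_gaze_and_head_pose(eye_angle_deg, head_yaw_deg, w_eye=0.6, w_head=0.4):
    return w_eye * eye_angle_deg + w_head * head_yaw_deg

def delay_and_sum_steering(mic_positions_m, azimuth_deg, fs=16000, c=343.0):
    """Integer sample delays per microphone for a far-field source
    at the given azimuth (planar array, speed of sound c in m/s)."""
    theta = np.deg2rad(azimuth_deg)
    look_dir = np.array([np.cos(theta), np.sin(theta)])  # unit look direction
    # Project each mic position onto the look direction -> relative time delay
    delays_s = mic_positions_m @ look_dir / c
    delays_s -= delays_s.min()          # shift so all delays are non-negative
    return np.round(delays_s * fs).astype(int)

# Example: 4-mic linear array with 2 cm spacing; gaze 15 deg right of the
# head, head turned 30 deg left -> net steering azimuth of -3 degrees.
mics = np.array([[0.00, 0.0], [0.02, 0.0], [0.04, 0.0], [0.06, 0.0]])
azimuth = fuse_gaze_and_head_pose(eye_angle_deg=15.0, head_yaw_deg=-30.0)
print(delay_and_sum_steering(mics, azimuth))
```

In a real-time pipeline, the fused azimuth would be re-estimated every video frame and the resulting delays applied to the microphone signals before summation, so the enhanced beam follows the listener's attention as it shifts between speakers.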