Target speaker extraction (TSE) using deep learning offers potential benefits for hearing-impaired listeners. However, its implementation in hearing aids requires low-latency, low-complexity algorithms capable of real-time operation. Existing models that meet these requirements use multi-channel input from binaural hearing aids to improve speech intelligibility but are limited to extracting speech from a single fixed direction. In this work, we exploit the direction of arrival (DOA) of the target speaker in a deep-learning model to extract speech from arbitrary angles in complex acoustic environments. We introduce a novel DOA encoding method based on complex exponentials and compare it to one-hot encoding. We further explore three low-complexity methods for integrating the DOA information into the model. Evaluation with objective measures demonstrates that our extended model outperforms the baseline system, with our novel encoding method achieving superior performance in 16 out of 21 cases.
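The abstract contrasts a complex-exponential DOA encoding with one-hot encoding but does not spell out either form in this excerpt. As a hypothetical illustration only, the sketch below assumes the one-hot variant discretizes the angle into fixed bins, and the complex-exponential variant stacks the real and imaginary parts of e^{jk\theta} for a few harmonics k; the bin resolution, harmonic count, and function names are all assumptions, not the paper's specification.

```python
import numpy as np

def one_hot_doa(angle_deg, resolution_deg=15):
    """One-hot encoding (assumed form): discretize the DOA into
    fixed angular bins and set a single entry to 1."""
    n_bins = 360 // resolution_deg
    vec = np.zeros(n_bins)
    vec[int(round(angle_deg / resolution_deg)) % n_bins] = 1.0
    return vec

def complex_exp_doa(angle_deg, n_harmonics=4):
    """Complex-exponential encoding (assumed form): represent the
    DOA angle theta by e^{j*k*theta} for k = 1..n_harmonics,
    stacked as real and imaginary parts. Unlike one-hot bins,
    this code varies smoothly and continuously with the angle."""
    theta = np.deg2rad(angle_deg)
    k = np.arange(1, n_harmonics + 1)
    z = np.exp(1j * k * theta)
    return np.concatenate([z.real, z.imag])
```

Under these assumptions, a 15-degree grid gives a 24-dimensional one-hot code, while four harmonics give an 8-dimensional continuous code, which hints at why a compact, smooth encoding can suit low-complexity models handling arbitrary target angles.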