Guided source separation (GSS) is a target-speaker extraction method that uses pre-computed speaker activities and blind source separation to perform front-end enhancement of overlapped speech signals. First proposed during the CHiME-5 challenge, it provided significant improvements over the delay-and-sum beamforming baseline. Despite its strengths, the method has seen limited adoption for meeting transcription benchmarks, primarily due to its high computation time. In this paper, we describe our improved implementation of GSS that leverages modern GPU-based pipelines, including batched processing of frequencies and segments, to provide a 300x speed-up over CPU-based inference. This allows us to perform detailed ablation studies over several parameters of the GSS algorithm, such as context duration, number of channels, and noise class. We provide reproducible pipelines for speaker-attributed transcription of popular meeting benchmarks: LibriCSS, AMI, and AliMeeting.
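As a rough illustration of the frequency-batching idea (a minimal sketch, not the paper's actual implementation or API), the snippet below computes mask-weighted spatial covariance matrices for all frequency bins in one batched GPU call rather than looping over bins on the CPU. The array shapes and the helper name are assumptions made for this example.

```python
import cupy as cp  # GPU arrays with a NumPy-compatible API

def spatial_covariances(stft, masks):
    """Mask-weighted spatial covariance for every frequency bin at once.

    stft  : complex array, shape (F, D, T) -- F frequency bins,
            D microphones, T STFT frames.
    masks : real array, shape (F, T) -- a time-frequency mask for the
            target speaker.

    A CPU implementation would typically loop over the F bins; on the
    GPU, a single batched einsum covers all of them.
    """
    weighted = stft * masks[:, None, :]                     # broadcast mask over mics
    cov = cp.einsum('fdt,fet->fde', weighted, stft.conj())  # (F, D, D)
    return cov / masks.sum(axis=1)[:, None, None]           # normalize per bin

# Illustrative usage on random data (shapes are assumptions):
F, D, T = 257, 8, 400
stft = cp.random.randn(F, D, T) + 1j * cp.random.randn(F, D, T)
masks = cp.random.rand(F, T)
R = spatial_covariances(stft, masks)  # (257, 8, 8)
```

The same broadcasting pattern extends to batching over segments, which is where much of the reported speed-up over serial CPU inference would come from.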