This paper introduces a novel encoder architecture designed to enhance transducer-based dual-mode automatic speech recognition (ASR). Our approach leverages the selective state-space model Mamba to enable attention-free dual-mode ASR. Bidirectional Mamba (BiMamba) captures full context with constant per-frame inference cost, unlike attention-based models with quadratic complexity, but it is limited to offline processing. Conversely, relying solely on unidirectional Mamba for dual-mode ASR degrades recognition performance in both offline and streaming modes because of its restricted access to future context. To address this issue, we propose latency-controlled BiMamba (LC-BiMamba), which processes input chunk-wise in streaming mode while still accessing future context within each chunk, and functions as standard BiMamba in offline mode. Experimental results demonstrate that LC-BiMamba outperforms a baseline Conformer system, achieving faster and more accurate decoding in our dual-mode ASR framework.
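To make the latency-control idea concrete, the following is a minimal sketch (not the paper's code; the function name and chunking rule are illustrative assumptions). It computes, for each frame, the range of frames accessible under latency-controlled chunking: the forward pass sees the entire past, while future context is capped at the end of the current chunk, bounding streaming latency to at most one chunk.

```python
# Hypothetical illustration of latency-controlled chunk-wise context
# (not the authors' implementation).
def lc_context(num_frames, chunk_size):
    """For each frame t, return (earliest, latest) accessible frame indices.

    The forward (unidirectional) pass sees all past frames; the backward
    pass is restricted to the current chunk, so future context extends only
    to the chunk boundary.
    """
    ctx = []
    for t in range(num_frames):
        chunk_end = ((t // chunk_size) + 1) * chunk_size - 1  # last frame of t's chunk
        ctx.append((0, min(chunk_end, num_frames - 1)))
    return ctx

print(lc_context(6, 3))
# [(0, 2), (0, 2), (0, 2), (0, 5), (0, 5), (0, 5)]
```

In offline mode the chunk size can simply be set to the full utterance length, in which case every frame sees the whole sequence, matching standard BiMamba behavior.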