The growing size of speech models underscores the importance of parameter efficiency in practical automatic speech recognition (ASR) systems. Parameter sharing, which reuses the same parameters multiple times, has emerged as a promising way to reduce storage requirements. However, previous studies have often struggled to balance the number of parameters against performance. In this paper, we propose a novel architecture that substantially reduces the number of parameters while minimizing performance degradation. The key idea is to insert a lightweight adapter module that adjusts the features produced by the shared parameters, thereby restoring diversity to the representations. We introduce a unique adapter module and parameter-sharing configuration tailored to Conformer-based ASR encoders. Experimental results demonstrate that the proposed architecture reduces the number of parameters by approximately 50% and computation by approximately 20% without compromising speech recognition performance.
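The core idea above can be illustrated with a minimal, framework-free sketch: a single weight matrix is reused by every encoder layer, and each layer owns only a small bottleneck adapter (down-projection, nonlinearity, up-projection, residual add) that specializes the shared features. This is an illustrative toy, not the paper's Conformer implementation; all names, dimensions, and the exact adapter placement are assumptions.

```python
def matvec(W, x):
    # plain matrix-vector product over nested lists
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

class SharedAdapterEncoder:
    """Toy sketch (hypothetical): one weight matrix shared across all
    layers; each layer adds a lightweight per-layer adapter so that the
    shared transform can produce layer-specific representations."""

    def __init__(self, num_layers, d_model=4, d_bottleneck=2):
        # the single shared weight matrix, reused by every layer
        self.W_shared = [[0.1 * (i + j + 1) for j in range(d_model)]
                         for i in range(d_model)]
        # per-layer adapters are the only layer-specific parameters:
        # a down-projection (d_model -> d_bottleneck) and an
        # up-projection (d_bottleneck -> d_model)
        self.adapters = [
            ([[0.01 * (l + 1)] * d_model for _ in range(d_bottleneck)],
             [[0.01 * (l + 1)] * d_bottleneck for _ in range(d_model)])
            for l in range(num_layers)
        ]

    def forward(self, x):
        for down, up in self.adapters:
            h = relu(matvec(self.W_shared, x))        # shared transform
            a = matvec(up, relu(matvec(down, h)))     # lightweight adapter
            x = [hi + ai for hi, ai in zip(h, a)]     # residual add
        return x
```

With `d_bottleneck` much smaller than `d_model`, each adapter costs roughly `2 * d_model * d_bottleneck` parameters per layer, which is far less than the `d_model * d_model` a fully independent layer would require; this is the sense in which sharing plus adapters trades a small per-layer cost for a large reduction in total parameters.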