Spoken language models (SLMs) have gained increasing attention alongside advancements in text-based, decoder-only language models. SLMs process both text and speech, enabling simultaneous speech understanding and generation. This paper presents SpinHuBERT and Double-Codebook Speaker-invariant Clustering (DC-Spin), which improve speech tokenization by bridging audio signals and SLM tokens. DC-Spin extracts speaker-invariant tokens that are rich in phonetic information and resilient to input variations, enhancing zero-shot SLM tasks and speech resynthesis. Comparisons of tokenization methods and downstream task proxies show that tokens easily modeled by an n-gram LM or well aligned with phonemes yield strong performance, offering insights for designing speech tokenizers for SLMs.