The mixture-of-experts (MoE) based automatic speech recognition (ASR) model can achieve remarkable performance, but its huge model size poses great challenges for deployment. It is therefore important to compress the model and reduce its computational cost. In this paper, we propose a compressed MoE (CMoE) ASR model that simplifies the MoE structure through knowledge distillation and reduces the parameter bit-width through quantization, and we provide two compression pipelines (one-stage and two-stage). For quantization, we adopt the binary weight network to quantize the weights to 1 bit while keeping the quantization error small, and learned step size quantization to quantize the activations to 4 bits. Experimental results show that the quantized dense network compressed from the MoE-based ASR model by our method is 150x smaller with very small accuracy loss. The proposed model is thus suitable for deployment on embedded devices.
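To make the two quantizers named above concrete, the following is a minimal PyTorch sketch of 1-bit weight quantization in the style of a binary weight network (sign plus a per-tensor scaling factor, with a straight-through estimator) and 4-bit learned step size quantization for activations. The class names, layer shapes, and hyperparameters (`BinarizeSTE`, `LSQActivation`, `QuantLinear`, `act_bits=4`) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """Binary weight quantization: sign(w) scaled by mean(|w|), STE backward."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        alpha = w.abs().mean()          # per-tensor scaling factor
        return alpha * torch.sign(w)

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        # Simplified straight-through estimator: pass gradients where |w| <= 1.
        return grad_output * (w.abs() <= 1).float()


class LSQActivation(nn.Module):
    """Learned step size quantization (LSQ) for unsigned activations."""

    def __init__(self, bits=4):
        super().__init__()
        self.qmax = 2 ** bits - 1                      # 15 levels for 4-bit unsigned
        self.step = nn.Parameter(torch.tensor(1.0))    # learnable step size s

    def forward(self, x):
        # Gradient scaling for the step size, as suggested in the LSQ paper.
        g = 1.0 / (x.numel() * self.qmax) ** 0.5
        s = self.step * g + (self.step * (1.0 - g)).detach()
        q = torch.clamp(x / s, 0, self.qmax)
        q = (q.round() - q).detach() + q               # STE round
        return q * s


class QuantLinear(nn.Module):
    """Linear layer with 1-bit weights and 4-bit activations (illustrative)."""

    def __init__(self, in_features, out_features, act_bits=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.act_quant = LSQActivation(bits=act_bits)

    def forward(self, x):
        x_q = self.act_quant(x)
        w_q = BinarizeSTE.apply(self.weight)
        return F.linear(x_q, w_q, self.bias)


if __name__ == "__main__":
    layer = QuantLinear(256, 512)
    out = layer(torch.relu(torch.randn(8, 256)))  # unsigned (post-ReLU) activations
    print(out.shape)                              # torch.Size([8, 512])
```

In this sketch, quantization-aware training keeps full-precision "shadow" weights that are binarized on every forward pass, while the straight-through estimators allow gradients to flow through the non-differentiable sign and rounding operations; how the paper combines this with the distillation of the MoE into a dense student is described in the following sections.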