Fine-tuning pre-trained large-scale Transformer models has recently yielded remarkable results on lung sound classification tasks. However, the predominant approach remains full fine-tuning, which updates all parameters of the large-scale model during training. As these models continue to grow, full fine-tuning demands substantial computational resources and time. To address this issue, we introduce LungAdapter, an efficient fine-tuning approach based on Adapter tuning. The method inserts lightweight trainable blocks into a pre-trained audio Transformer, enabling it to extract information crucial for lung sound classification while the pre-trained parameters remain frozen. Experiments show that our method matches or even surpasses full fine-tuning while optimizing only 2.83% of the parameters.
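As a rough illustration of the general idea (a minimal sketch of standard bottleneck Adapter tuning, not necessarily the exact LungAdapter block design), the code below freezes a ViT/AST-style audio Transformer backbone and attaches one small trainable adapter after each encoder block; the `model.blocks` attribute, the bottleneck width, and the wrapper class are assumptions made for illustration.

```python
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project -> GELU -> up-project, plus a residual
    connection. Zero-initialising the up-projection makes the adapter start
    as an identity mapping, so training begins from the pre-trained behaviour."""
    def __init__(self, dim: int, bottleneck: int = 64):  # bottleneck width is illustrative
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

class BlockWithAdapter(nn.Module):
    """Wraps one frozen Transformer encoder block and applies an adapter
    to its output."""
    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block = block
        self.adapter = Adapter(dim)

    def forward(self, x):
        return self.adapter(self.block(x))

def insert_adapters(model: nn.Module, dim: int) -> nn.Module:
    """Freeze the pre-trained backbone and attach one trainable adapter per
    encoder block. Assumes a ViT/AST-style model that exposes its encoder
    layers as `model.blocks` (a hypothetical attribute for this sketch)."""
    for p in model.parameters():
        p.requires_grad = False  # pre-trained weights stay frozen
    model.blocks = nn.ModuleList(
        BlockWithAdapter(blk, dim) for blk in model.blocks
    )
    return model  # only the newly created adapter parameters receive gradients
```

Because only the adapter weights are trainable, the optimizer sees a small fraction of the network: for a hypothetical 12-block, 768-dimensional backbone with bottleneck 64, the adapters contribute roughly 12 × 2 × 768 × 64 ≈ 1.2 M parameters, the same order of magnitude as the 2.83% trainable-parameter budget reported above.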