We introduce a challenging and under-explored task -- Machine Unlearning for Speech Tasks. Machine unlearning aims to remove the influence of a subset of training data from an already trained model. It has practical applications such as removing private or sensitive information and updating outdated knowledge. We propose two unlearning tasks, sample unlearning and class unlearning, and discuss their challenges. In addition, we propose to use performance on the retained and forgotten subsets, along with membership inference attacks, as evaluation metrics. Experiments on two different speech tasks -- keyword spotting and speaker identification -- demonstrate that unlearning on speech data is more challenging than unlearning on image and text data, which have received more attention. We discuss future steps to advance machine unlearning on speech data, including curriculum learning.