ISCA Archive Interspeech 2025

Improving Linguistic Diversity of Large Language Models with Possibility Exploration Fine-Tuning

Long Mai, Julie Carson-Berndsen

While Large Language Models (LLMs) have made significant strides in replicating human-like abilities, there are concerns about a reduction in the linguistic diversity of their outputs. This results in the homogenization of viewpoints and perspectives, as well as the underrepresentation of specific demographic groups. Although several fine-tuning and prompting techniques have been proposed to address this issue, they are often tailored to specific tasks or incur a substantial increase in computational cost and latency. This makes them difficult to deploy in applications that demand very low latency, such as spoken chatbots or virtual assistants. We propose Possibility Exploration Fine-Tuning (PoExFT), a task-agnostic framework that enhances the response diversity of LLMs without increasing latency. Given the same prompt, models fine-tuned with PoExFT can simultaneously generate multiple diverse responses. Experiments on dialogue and story generation tasks show that PoExFT significantly enhances the diversity of LLM outputs, as evidenced by lower similarity between candidate responses. Because PoExFT emphasizes semantic diversity over lexical diversity, it can also notably reduce demographic bias in dialogue systems.