Summary of Finetuning Language Models to Emit Linguistic Expressions of Uncertainty, by Arslan Chaudhry et al.
Finetuning Language Models to Emit Linguistic Expressions of Uncertainty
by Arslan Chaudhry, Sridhar Thiagarajan, Dilan Gorur
First submitted to arXiv on: 18 Sep 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to read whichever version suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | The paper’s original abstract; read it on the arXiv page.
Medium | GrooveSquid.com (original content) | Large language models (LLMs) are increasingly used for information-seeking and decision-making tasks. However, these models often generate incorrect information and present it convincingly, making it hard for users to judge the accuracy of their predictions. To address this, we propose supervised finetuning on uncertainty-augmented predictions to develop LLMs that express linguistic uncertainty: we first measure the calibration of pretrained models and then finetune them to generate calibrated verbal expressions of uncertainty (a hedged illustrative sketch of such a pipeline appears after this table). Experimental results on several question-answering datasets show that the approach yields well-calibrated language models, particularly for single-claim answers.
Low | GrooveSquid.com (original content) | Large language models are very smart computer programs that can help us find information or make decisions. Sometimes these models give us wrong information and make it sound like they’re really sure about it. That makes it hard for people to tell whether the model is right. To fix this, we developed a new way to train language models so they say when they’re not sure. We tested our method on different question-answering tasks and found that it helps language models be more honest about their uncertainty.
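To make the two-step recipe in the medium summary concrete, here is a minimal Python sketch of what such a pipeline could look like: map a confidence score to a verbal hedge, build a supervised finetuning target from it, and check calibration with expected calibration error (ECE). The hedge phrases, confidence thresholds, and function names (`hedge_for`, `make_finetuning_example`) are illustrative assumptions, not the paper’s actual implementation, and how the confidence score itself is obtained is left abstract here.

```python
import numpy as np

# Illustrative mapping from confidence score to a linguistic hedge.
# Thresholds and phrases are assumptions, not the paper's choices.
HEDGES = [
    (0.9, "I'm almost certain"),
    (0.7, "I'm fairly confident"),
    (0.5, "I think"),
    (0.0, "I'm unsure, but possibly"),
]

def hedge_for(confidence: float) -> str:
    """Return the hedge phrase for a given confidence in [0, 1]."""
    for threshold, phrase in HEDGES:
        if confidence >= threshold:
            return phrase
    return HEDGES[-1][1]  # defensive fallback

def make_finetuning_example(question: str, answer: str, confidence: float) -> dict:
    """Build one supervised finetuning pair whose target prepends a
    verbal uncertainty expression to the original answer."""
    return {
        "prompt": question,
        "target": f"{hedge_for(confidence)} the answer is {answer}.",
    }

def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    """Standard ECE: bin predictions by confidence and average the gap
    between mean confidence and empirical accuracy, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

# Usage: a well-calibrated model's stated confidence tracks its accuracy.
ex = make_finetuning_example("Who wrote Hamlet?", "William Shakespeare", 0.95)
print(ex["target"])  # I'm almost certain the answer is William Shakespeare.
print(expected_calibration_error([0.95, 0.6, 0.3], [1, 1, 0]))
```

A model finetuned on targets like these would then be evaluated by parsing its hedges back into numeric confidences and recomputing ECE; lower ECE after finetuning would indicate better-calibrated verbal uncertainty.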
Keywords
» Artificial intelligence » Fine tuning » Question answering » Supervised