Perceptions of Linguistic Uncertainty by Language Models and Humans

by Catarina G Belem, Markelle Kelly, Mark Steyvers, Sameer Singh, Padhraic Smyth

First submitted to arXiv on: 22 Jul 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract and is not reproduced here.

Medium Difficulty Summary (original content by GrooveSquid.com)
Language models can quantify uncertainty expressions like “probably” or “highly unlikely” with varying degrees of accuracy, but they tend to be biased towards their prior knowledge. While humans generally interpret these expressions consistently, language models respond differently depending on whether the underlying statement is true or false. This paper investigates how 10 different language models map linguistic uncertainty expressions onto numerical responses, finding that 7 of the 10 models do so in a human-like manner. However, the models’ responses are systematically shifted by whether the statement is true or false, indicating a bias towards their prior knowledge.
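
To make the experimental idea concrete, here is a minimal sketch, not the paper’s actual protocol, of how one might elicit numeric probabilities for uncertainty phrases and measure the true/false gap the summary describes. Everything here is illustrative: `query_model` is a hypothetical placeholder for a real language-model API, and the phrases and statements are toy examples.

```python
# Sketch of the elicitation setup described above. All names are
# illustrative: `query_model` stands in for a real LLM API call.

from statistics import mean

UNCERTAINTY_PHRASES = ["almost certain", "probably", "unlikely", "highly unlikely"]

# Toy items of the form (statement, is_true); a real study would use
# many independently verified facts.
STATEMENTS = [
    ("the Eiffel Tower is in Paris", True),
    ("the Eiffel Tower is in Rome", False),
]

def query_model(prompt: str) -> float:
    """Hypothetical placeholder: send `prompt` to a language model and
    parse a probability in [0, 1] from its reply."""
    raise NotImplementedError("replace with a real model call")

def elicit(phrase: str, statement: str) -> float:
    # Ask the model what probability the *speaker* intends, so the
    # correct answer depends only on the uncertainty phrase.
    prompt = (
        f'Someone says: "It is {phrase} that {statement}."\n'
        "What probability (between 0 and 1) does the speaker assign to "
        "the statement? Answer with a single number."
    )
    return query_model(prompt)

def true_false_gap(phrase: str) -> float:
    # Difference between mean responses on true vs. false statements.
    # A near-zero gap is human-like; a large gap signals that the
    # model's own knowledge of the fact is leaking into its answer.
    true_vals = [elicit(phrase, s) for s, ok in STATEMENTS if ok]
    false_vals = [elicit(phrase, s) for s, ok in STATEMENTS if not ok]
    return mean(true_vals) - mean(false_vals)
```

Under this setup, a human-like model assigns a number based on the phrase alone, so `true_false_gap` stays near zero; the prior-knowledge bias reported in the paper would show up as a systematically positive gap.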
Low Difficulty Summary (original content by GrooveSquid.com)
Language models try to understand what humans mean when they say “probably” or “highly unlikely”. Researchers tested how well language models could translate these phrases into numbers that make sense. They found that most models (7 out of 10) can do this in a way that’s similar to how humans do it. But the models behave differently depending on whether what they’re saying is true or false. This means that language models might not always understand us as well as we think.

Keywords

  • Artificial intelligence