Summary of More Is More: Addition Bias in Large Language Models, by Luca Santagata et al.


More is More: Addition Bias in Large Language Models

by Luca Santagata, Cristiano De Nobili

First submitted to arXiv on: 4 Sep 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Computers and Society (cs.CY); Human-Computer Interaction (cs.HC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper investigates whether Large Language Models (LLMs) exhibit an additive bias, similar to humans, who tend to prefer making additions rather than subtractions. The authors tested various LLMs on tasks designed to measure their propensity for additive versus subtractive modifications and found that every model tested showed a significant preference for additive over subtractive changes. For example, in a palindrome creation task, one model added letters 97.85% of the time, while in a text summarization task, another model produced longer summaries 59.40% to 75.10% of the time when asked to improve its own or others’ writing. The authors suggest that this additive bias could have implications for the large-scale use of LLMs, increasing resource use, environmental impact, and economic costs through overconsumption and waste.
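The kind of measurement described above can be approximated with a simple length comparison between a model's input and output. The sketch below is illustrative only: the helper names and the word-count proxy are our own assumptions, not the authors' exact metric or code.

```python
def classify_edit(original: str, revised: str) -> str:
    """Classify a revision as additive, subtractive, or neutral,
    using word count as a rough proxy for the change's direction."""
    delta = len(revised.split()) - len(original.split())
    if delta > 0:
        return "additive"
    if delta < 0:
        return "subtractive"
    return "neutral"

def addition_rate(pairs):
    """Fraction of (original, revised) pairs classified as additive."""
    labels = [classify_edit(o, r) for o, r in pairs]
    return labels.count("additive") / len(labels)

# Toy example: three revisions, two of which grow the text.
pairs = [
    ("the cat sat", "the big cat sat down"),       # additive
    ("a long sentence here", "short one"),         # subtractive
    ("improve this", "improve this text please"),  # additive
]
print(f"addition rate: {addition_rate(pairs):.2f}")  # → 0.67
```

In a real experiment, the revised texts would come from LLM responses to prompts like "improve this summary", and the addition rate would be tallied over many trials per model, as the paper does across tasks.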
Low Difficulty Summary (written by GrooveSquid.com, original content)
LLMs are special kinds of computer programs that can understand and generate human-like language. This paper looks at how these machines make changes when they’re asked to do something with text. It turns out that LLMs like to add things more often than remove them, just like humans do! The authors ran tests and found that many different LLMs behave this way, including popular ones like GPT-3.5 Turbo and Mistral. They think this might be a problem because it could mean these machines use up too many resources and have a big environmental impact.

Keywords

» Artificial intelligence  » GPT  » Summarization