
Summary of Large Language Models Are Skeptics: False Negative Problem of Input-conflicting Hallucination, by Jongyoon Song et al.


Large Language Models are Skeptics: False Negative Problem of Input-conflicting Hallucination

by Jongyoon Song, Sangwon Yu, Sungroh Yoon

First submitted to arXiv on: 20 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original GrooveSquid.com content)
The paper introduces a new category of bias in large language models (LLMs), which the authors term the “false negative problem.” The issue arises when LLMs produce responses that conflict with the input context, most often by incorrectly judging a supported statement to be false. The researchers run experiments on pairs of statements with contradictory factual directions and find that LLMs are biased toward false negatives; in particular, they observe greater overconfidence in “False” responses when the models are presented with contradictory information (a toy probe along these lines is sketched after these summaries). The study also examines how context rewriting and query rewriting relate to the false negative problem, finding that both methods effectively alleviate it.
Low Difficulty Summary (original GrooveSquid.com content)
Large language models (LLMs) are getting better at answering questions, but sometimes they get things wrong. This paper looks at a particular kind of mistake called a “false negative”: the model answers “False” even when the context shows the statement is actually true. The researchers ran experiments and found that LLMs are especially prone to this kind of mistake. They also looked at ways to fix the problem and found that rewriting the question or the surrounding context helps LLMs get the answer right more often.

Keywords

» Artificial intelligence