Summary of "What Evidence Do Language Models Find Convincing?", by Alexander Wan et al.
What Evidence Do Language Models Find Convincing?
by Alexander Wan, Eric Wallace, Dan Klein
First submitted to arXiv on: 19 Feb 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
| --- | --- | --- |
| High | Paper authors | Read the original abstract on arXiv. |
| Medium | GrooveSquid.com (original content) | This paper investigates how large language models (LLMs) respond to ambiguous queries that require evaluating conflicting evidence. The researchers construct ConflictingQA, a dataset pairing controversial queries with real-world documents that differ in facts, argument styles, and answers. They then use this dataset to analyze how text features affect LLM predictions. The results show that current models prioritize a website's relevance to the query over stylistic features important for human judgement, such as referencing scientific studies or writing in a neutral tone. This highlights the need for high-quality datasets and possibly retraining LLMs to better align with human judgements. |
| Low | GrooveSquid.com (original content) | This paper looks at how big language models answer tricky questions that involve checking different facts and opinions. The scientists made a special dataset called ConflictingQA, which pairs tricky questions with real-life documents that have different information, ways of writing, and answers. They used this dataset to see what makes the language models' predictions change. The results show that the models mostly look at how relevant a website is to the question, but don't pay much attention to things humans care about, like whether the text has good references or is written in a calm tone. This shows why we need really good datasets and maybe even to teach the language models differently so they make better decisions. |
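To make the setup concrete, here is a minimal sketch of the kind of conflicting-evidence probe the summaries describe: give a model a controversial yes/no query plus two passages that disagree, and see which one sways its answer. This is not the authors' actual pipeline; it assumes the OpenAI Python SDK (v1) with an API key in the environment, and the query, passages, and model name are illustrative placeholders (in ConflictingQA, the passages come from real web documents).

```python
# Minimal sketch of a conflicting-evidence probe (not the paper's pipeline).
# Assumptions: OpenAI Python SDK v1, OPENAI_API_KEY set in the environment,
# and a toy query/passage pair standing in for real ConflictingQA documents.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUERY = "Is aspartame safe for human consumption?"  # hypothetical query
EVIDENCE_YES = (
    "A 2023 review of 40 controlled studies found no adverse health "
    "effects of aspartame at typical consumption levels."
)
EVIDENCE_NO = (
    "One blogger reports severe headaches after drinking diet soda and "
    "believes aspartame is to blame."
)

def predict(query: str, doc_a: str, doc_b: str) -> str:
    """Ask the model a yes/no query given two conflicting documents."""
    prompt = (
        f"Question: {query}\n\n"
        f"Document 1: {doc_a}\n\n"
        f"Document 2: {doc_b}\n\n"
        "Based only on these documents, answer Yes or No."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

# Swap document order to control for position bias; a stable answer
# across both orders suggests the model is reacting to the evidence
# itself rather than to where it appears in the prompt.
for a, b in [(EVIDENCE_YES, EVIDENCE_NO), (EVIDENCE_NO, EVIDENCE_YES)]:
    print(predict(QUERY, a, b))
```

In spirit, varying one text feature at a time across passages (e.g., adding scientific references or a neutral tone to one side) and checking whether the model's answer flips is how one would measure which evidence features it finds convincing.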
Keywords
* Artificial intelligence
* Attention