Summary of ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM, by Zhaochen Su et al.
ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM
by Zhaochen Su, Jun Zhang, Xiaoye Qu, Tong Zhu, Yanshu Li, Jiashuo Sun, Juntao Li, Min Zhang, Yu Cheng
First submitted to arXiv on: 22 Aug 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper presents ConflictBank, a comprehensive benchmark for evaluating knowledge conflicts in large language models (LLMs). Despite their impressive advances across many disciplines, LLMs remain prone to hallucinations caused by knowledge conflicts. The benchmark assesses conflicts from three aspects: conflicts in retrieved knowledge, conflicts within models' encoded knowledge, and the interplay between the two. Using a novel construction framework, the study builds over seven million claim-evidence pairs and question-answer pairs, and analyzes twelve model instances from four model families. The findings cover model scale, conflict causes, and conflict types, offering insights for developing more reliable LLMs. |
| Low | GrooveSquid.com (original content) | Large language models have come a long way in many fields, but they still struggle with knowledge conflicts. These conflicts can cause hallucinations, a major problem that needs to be addressed. This paper introduces a new benchmark called ConflictBank to help researchers understand and study these problems. The benchmark looks at three kinds of conflicts: conflicts in the information a model retrieves, conflicts in the knowledge stored in its memory, and how the two interact. By analyzing twelve instances of four different model families, the study sheds light on what causes these conflicts and offers insights for building more reliable models. |