Fundamental Problems With Model Editing: How Should Rational Belief Revision Work in LLMs?
by Peter Hase, Thomas Hofweber, Xiang Zhou, Elias Stengel-Eskin, Mohit Bansal
First submitted to arXiv on: 27 Jun 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper and are written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | In this paper, the authors investigate model editing, the problem of updating a language model’s factual knowledge as the world changes over time. They argue that this problem is rooted in belief revision, a long-standing challenge in philosophy that has resisted general solutions for decades. The authors critique the standard formulation of the model editing problem and identify 12 open problems with it, including challenges in defining the problem, developing benchmarks, and the assumption that language models have editable beliefs at all. As a formal testbed for research, they introduce a semi-synthetic dataset based on Wikidata in which edits can be evaluated against the labels of an idealized Bayesian agent, allowing belief revision in language models to be compared against a desirable epistemic standard (a toy sketch of this comparison appears after the table). |
| Low | GrooveSquid.com (original content) | This paper is about how we can teach computers new things over time. Right now we don’t have a good way to do this, because our computer programs hold on to old information very strongly. We need a way to make them learn new facts while keeping the old, still-correct ones intact. The researchers look at why it is hard to edit what computers know, and they propose a testbed for measuring how well computers can learn new things. |
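
To make the medium summary’s evaluation idea concrete, the sketch below shows one way edits could be scored against an idealized Bayesian agent: compute the agent’s posterior for a fact from a prior and evidence likelihoods, then measure how far the edited model’s probability deviates from that label. This is a minimal illustration of the general setup, not the paper’s actual dataset or code; the facts, probabilities, and the choice of mean absolute deviation as the agreement metric are all assumptions made for the example.

```python
# A minimal sketch: compare an edited model's probability for a fact against
# the posterior an idealized Bayesian agent would assign after seeing the
# evidence behind the edit. All names, numbers, and the agreement metric are
# illustrative assumptions, not the paper's dataset or evaluation code.

def bayesian_label(prior: float, lik_true: float, lik_false: float) -> float:
    """Posterior P(fact | evidence) via Bayes' rule:
    P(T|E) = P(E|T) P(T) / (P(E|T) P(T) + P(E|F) P(F))."""
    return (lik_true * prior) / (lik_true * prior + lik_false * (1.0 - prior))

# Hypothetical edit cases: a prior belief in the fact, likelihoods of the
# observed evidence under the fact being true/false, and the probability the
# edited model assigns to the fact afterwards.
cases = [
    {"fact": "Q1 capital-of Q2", "prior": 0.5, "lik_t": 0.9, "lik_f": 0.1, "model_p": 0.95},
    {"fact": "Q3 capital-of Q4", "prior": 0.8, "lik_t": 0.2, "lik_f": 0.7, "model_p": 0.60},
]

# Score how closely the edited model tracks the Bayesian labels, using mean
# absolute deviation as one simple choice of agreement metric.
deviations = []
for case in cases:
    label = bayesian_label(case["prior"], case["lik_t"], case["lik_f"])
    deviations.append(abs(case["model_p"] - label))

print(f"Mean absolute deviation from Bayesian labels: {sum(deviations) / len(deviations):.3f}")
```

Under this kind of setup, a lower deviation would mean the model’s post-edit beliefs track the epistemic standard more closely; the paper’s actual testbed derives its facts and labels from Wikidata rather than hand-written values like those above.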