Summary of Joint Knowledge Editing For Information Enrichment and Probability Promotion, by Wenhang Shi et al.
Joint Knowledge Editing for Information Enrichment and Probability Promotion
by Wenhang Shi, Yiren Chen, Shuqing Bian, Xinyi Zhang, Zhe Zhao, Pengfei Hu, Wei Lu, Xiaoyong Du
First submitted to arXiv on: 22 Dec 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Information Retrieval (cs.IR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper tackles the problem of updating knowledge in large language models so that it keeps pace with real-world information. Most existing editing methods focus on the low layers, because probes suggest that answer information is enriched there. However, those probes only reveal the critical recall stages for the original answers, not for the target answers an edit should produce. To resolve this inconsistency, the authors propose a contrast-based probe approach and identify two crucial recall stages: Information Enrichment in the low layers and Probability Promotion in the high layers. Building on these insights, they develop the Joint knowledge Editing for information Enrichment and probability Promotion (JEEP) method, which jointly edits both low and high layers to modify the two critical recall stages. JEEP is designed so that updates to the two distinct regions share the same objectives and remain complementary. |
Low | GrooveSquid.com (original content) | Knowledge stored in large language models needs regular updates to stay current as real-world information changes. The authors of this paper propose a new way to update that knowledge, called Joint knowledge Editing for information Enrichment and probability Promotion (JEEP). JEEP helps models learn new information by editing both the low and the high layers. This approach helps the model genuinely adopt the new answer rather than falling back on the old one. |
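To make the idea of a contrast-based layer probe more concrete, here is a minimal, hypothetical sketch of the general technique the summary describes: project each layer's hidden state through an unembedding matrix (logit-lens style) and compare the probability assigned to the original answer token against the target answer token, layer by layer. All names, shapes, and values below are illustrative toy data, not the authors' actual code or model.

```python
import numpy as np

# Toy stand-ins for a transformer's internals (illustrative only).
rng = np.random.default_rng(0)
n_layers, d_model, vocab = 12, 16, 50
W_U = rng.normal(size=(d_model, vocab))        # unembedding matrix
hidden = rng.normal(size=(n_layers, d_model))  # one hidden state per layer

orig_id, target_id = 7, 23                     # illustrative token ids

def probe(h):
    """Softmax over the vocabulary from a single hidden state."""
    logits = h @ W_U
    p = np.exp(logits - logits.max())          # stable softmax
    return p / p.sum()

# A positive contrast at a layer would indicate the target answer is being
# promoted over the original answer at that depth.
for layer, h in enumerate(hidden):
    p = probe(h)
    contrast = p[target_id] - p[orig_id]
    print(f"layer {layer:2d}  contrast = {contrast:+.4f}")
```

In the paper's framing, such a per-layer comparison is what distinguishes where answer information is enriched (low layers) from where the answer's probability is promoted (high layers), motivating JEEP's choice to edit both regions jointly.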
Keywords
» Artificial intelligence » Probability » Recall