
Summary of Time Sensitive Knowledge Editing Through Efficient Finetuning, by Xiou Ge et al.


Time Sensitive Knowledge Editing through Efficient Finetuning

by Xiou Ge, Ali Mousavi, Edouard Grave, Armand Joulin, Kun Qian, Benjamin Han, Mostafa Arefiyan, Yunyao Li

First submitted to arXiv on: 6 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper proposes a new approach to updating and expanding knowledge in Large Language Models (LLMs) using Parameter-Efficient Fine-Tuning (PEFT) techniques. Current knowledge editing (KE) methods, such as locate-and-edit, suffer from limitations like poor performance on complex multi-hop queries and long editing times, making them impractical for large-scale knowledge updates. To overcome these challenges, the authors curate a comprehensive temporal dataset for benchmarking KE performance and investigate the effect of fine-tuning different layers of an LLM on multi-hop question answering. The results show that PEFT outperforms locate-and-edit techniques for time-sensitive knowledge updates.

Low Difficulty Summary (original content by GrooveSquid.com)
Large Language Models are super smart computers that can understand and generate human-like text. But it's hard to keep them up to date once they're trained. This paper finds a new way to update these models using something called Parameter-Efficient Fine-Tuning (PEFT). The current method, locate-and-edit, has some big problems: it doesn't do well on tricky questions and takes too long to work. To fix this, the authors created a special dataset to test how good PEFT is at updating language models. They also looked at how fine-tuning different parts of the model affects its ability to answer hard questions. The results show that PEFT is better than locate-and-edit at keeping these language models accurate and up to date.

Keywords

  • Artificial intelligence
  • Fine tuning
  • Parameter efficient
  • Question answering