
Model Editing by Standard Fine-Tuning

by Govind Gangadhar, Karl Stratos

First submitted to arXiv on: 16 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper presents a novel approach to model editing using standard fine-tuning, which is typically considered less effective than specialized methods. The authors show that with two minor adjustments – optimizing the conditional likelihood rather than the full likelihood, and augmenting the training data with random or similar unedited facts – standard fine-tuning can match highly specialized editors on the ZsRE and CounterFact datasets.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper shows how to edit the facts stored in a language model using a simple, familiar method: ordinary fine-tuning. Normally, this approach doesn't work well for editing models. But by making a couple of small changes, it can perform as well as more complex methods. The authors tested their idea on standard benchmark datasets and found that it works well.
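The first adjustment described in the medium-difficulty summary can be sketched in code. Below is a minimal, hypothetical PyTorch illustration (function names, shapes, and the `-100` masking convention are common causal-LM practice, not taken from the paper): the loss is computed only on the edited fact's target tokens, not the prompt, which is what "optimizing the conditional likelihood" refers to.

```python
import torch
import torch.nn.functional as F

def conditional_lm_loss(logits, input_ids, prompt_len):
    """Next-token cross-entropy over the target tokens only.

    Prompt positions are set to -100, the conventional ignore_index,
    so they contribute nothing to the loss: the model is trained on
    the conditional likelihood of the target given the prompt rather
    than the full likelihood of the whole sequence.
    """
    labels = input_ids.clone()
    labels[:, :prompt_len] = -100            # mask the prompt tokens
    shift_logits = logits[:, :-1, :]         # predict token t+1 from token t
    shift_labels = labels[:, 1:]
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        ignore_index=-100,
    )

# Toy usage: batch of 1, sequence of 6 tokens, vocabulary of 10;
# the first 4 tokens are the prompt and are excluded from the loss.
logits = torch.randn(1, 6, 10)
input_ids = torch.randint(0, 10, (1, 6))
loss = conditional_lm_loss(logits, input_ids, prompt_len=4)
```

The second adjustment would then amount to adding unedited facts (random or similar to the edit) as extra prompt/target pairs in the same training batches, so the edit does not disturb unrelated knowledge.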

Keywords

  • Artificial intelligence
  • Fine tuning
  • Likelihood