Correcting Large Language Model Behavior via Influence Function
by Han Zhang, Zhuo Zhang, Yi Zhang, Yuanzhao Zhai, Hanyang Peng, Yu Lei, Yue Yu, Hui Wang, Bin Liang, Lin Gui, Ruifeng Xu
First submitted to arXiv on: 21 Dec 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper's original abstract. Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | The proposed approach, LANCET, addresses the problem of large language models (LLMs) drifting from contemporary human preferences and societal norms. Whereas existing methods require costly human resources, LANCET needs no human involvement. It works in two phases: first, it uses influence functions to identify the training data that most strongly drives undesirable model outputs; second, it applies an Influence function-driven Bregman Optimization (IBO) technique to adjust the model's behavior based on those influence distributions (a simplified code sketch of both phases follows the table). The results show that LANCET corrects inappropriate LLM behaviors effectively and efficiently, outperforms methods that rely on collecting human preferences, and improves the interpretability of how human preferences are learned within LLMs. |
| Low | GrooveSquid.com (original content) | Large language models can become outdated or incorrect because people's preferences change over time, which makes it hard to keep them aligned with what humans want. Today, keeping these models correct takes a lot of human effort. LANCET is a new way to fix this: it finds the training data that causes the bad behavior, then adjusts the model so it acts more like humans expect. |
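The two phases described in the medium summary can be illustrated with a short, heavily simplified sketch. The code below is not the paper's algorithm: phase one uses a first-order gradient inner product as a stand-in for a full influence-function estimate (which would involve an inverse-Hessian-vector product), and phase two uses a plain squared-L2 Bregman term in place of the paper's IBO procedure. The names `model`, `bad_output_batch`, `train_examples`, and `flagged_batches` are hypothetical placeholders, assuming a HuggingFace-style `model(**batch).loss` interface.

```python
import torch

def flat_grad(model, batch):
    # Flattened loss gradient on `batch` w.r.t. trainable parameters.
    params = [p for p in model.parameters() if p.requires_grad]
    loss = model(**batch).loss  # assumes a HuggingFace-style forward API
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def influence_scores(model, bad_output_batch, train_examples):
    # Phase 1 (simplified): score each training example by the inner
    # product of its loss gradient with the gradient on the undesirable
    # output; larger scores suggest a stronger contribution to that
    # behavior. A real influence function would also apply an
    # inverse-Hessian-vector product.
    g_bad = flat_grad(model, bad_output_batch)
    return [torch.dot(flat_grad(model, ex), g_bad).item()
            for ex in train_examples]

def bregman_correct(model, flagged_batches, lr=1e-5, lam=0.1, steps=5):
    # Phase 2 (simplified): unlearn the flagged high-influence data by
    # gradient ascent on its loss, while a Bregman divergence anchored at
    # the pre-correction parameters keeps the model close to its original
    # behavior. With potential psi(x) = 0.5 * ||x||^2, the divergence
    # reduces to a squared L2 distance (a proximal-style term).
    anchor = [p.detach().clone() for p in model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        for batch in flagged_batches:
            opt.zero_grad()
            unlearn = -model(**batch).loss  # ascend on flagged data
            prox = sum(((p - a) ** 2).sum()
                       for p, a in zip(model.parameters(), anchor))
            (unlearn + lam * prox).backward()
            opt.step()
```

The squared-L2 potential is the simplest choice: the proximal term pulls the corrected model back toward the pre-correction anchor, matching the intuition that only the behavior traced to high-influence data should change while everything else stays intact.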
Keywords
» Artificial intelligence » Optimization