Summary of Understanding Likelihood Over-optimisation in Direct Alignment Algorithms, by Zhengyan Shi et al.
Understanding Likelihood Over-optimisation in Direct Alignment Algorithms
by Zhengyan Shi, Sander Land, Acyr Locatelli, Matthieu Geist, Max Bartolo
First submitted to arXiv on: 15 Oct 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
| --- | --- | --- |
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper investigates Direct Alignment Algorithms (DAAs) such as DPO and IPO, which align language models with human preferences without explicit reward modelling: they increase the likelihood of preferred completions and decrease that of non-preferred ones, while keeping the model close to its original behaviour. The study finds that pushing the likelihood of preferred completions ever higher does not always improve performance and can even degrade it; a slightly lower completion likelihood instead improves output diversity, which leads to better generalisation. The authors identify two indicators of this over-optimisation, Decreasing Entropy over Top-k Tokens and Diminishing Top-k Probability Mass, which signal declining performance under different regularisation strengths (both are illustrated in the sketches after the table). |
| Low | GrooveSquid.com (original content) | Direct Alignment Algorithms (DAAs) are special kinds of computer programs that help language models match what humans like or dislike. They teach the model to give better answers by making good answers more likely and bad answers less likely. The problem is that these algorithms can push too hard in that direction, which actually makes the model worse! This happens because the model becomes overly confident and keeps repeating a narrow set of answers instead of staying varied. To catch this issue, the researchers found two warning signs: when the variety among the model's most likely words shrinks, and when the probability the model places on its most likely words starts to drain away. By watching for these signs, developers can stop their algorithms before they over-optimise and build better models. |
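
To make the mechanism in the medium summary concrete, here is a minimal PyTorch sketch of the DPO objective. It is not the authors' implementation: the function name, argument names, and the `beta` default are illustrative, and the log-probabilities are assumed to be summed over each completion's tokens.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Minimal DPO loss sketch (illustrative, not the paper's code).

    Each argument is a per-example tensor of log-probabilities,
    summed over completion tokens, for the preferred ("chosen") and
    non-preferred ("rejected") completions under the policy being
    trained and under the frozen reference model.
    """
    # Log-ratios measure how far the policy has moved from the
    # reference model on each completion.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps

    # DPO pushes the chosen log-ratio above the rejected one; beta
    # controls how strongly the policy is kept near the reference.
    logits = beta * (chosen_ratio - rejected_ratio)
    return -F.logsigmoid(logits).mean()
```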
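
The two warning signs can likewise be monitored during training. The sketch below, again an illustration with an arbitrary choice of k, computes the entropy over the top-k tokens and the top-k probability mass from a batch of next-token logits; the paper identifies a decline in these quantities as the signal of over-optimisation under different regularisation strengths.

```python
import torch

def topk_indicators(logits, k=10):
    """Compute the two over-optimisation indicators (illustrative).

    `logits` has shape (batch, vocab_size) and holds a model's
    next-token logits. Returns the entropy over the top-k tokens and
    the top-k probability mass, each averaged over the batch.
    """
    probs = torch.softmax(logits, dim=-1)
    topk_probs, _ = probs.topk(k, dim=-1)

    # Top-k probability mass: how much of the distribution sits on
    # the k most likely tokens.
    mass = topk_probs.sum(dim=-1)

    # Entropy over the top-k tokens, after renormalising them into a
    # proper distribution; low entropy means a sharply peaked model.
    renorm = topk_probs / mass.unsqueeze(-1)
    entropy = -(renorm * renorm.clamp_min(1e-12).log()).sum(dim=-1)

    return entropy.mean(), mass.mean()
```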
Keywords
» Artificial intelligence » Alignment » Likelihood » Probability