Summary of Curry-DPO: Enhancing Alignment Using Curriculum Learning & Ranked Preferences, by Pulkit Pattnaik and Rishabh Maheshwary and Kelechi Ogueji and Vikas Yadav and Sathwik Tejaswi Madhusudhan
Curry-DPO: Enhancing Alignment using Curriculum Learning & Ranked Preferences
by Pulkit Pattnaik, Rishabh Maheshwary, Kelechi Ogueji, Vikas Yadav, Sathwik Tejaswi Madhusudhan
First submitted to arXiv on: 12 Mar 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper but are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper’s original abstract, available on arXiv. |
| Medium | GrooveSquid.com (original content) | Direct Preference Optimization (DPO) is a technique that uses pairwise preference data to align Large Language Models (LLMs) with human preferences. The authors propose using multiple responses to a given prompt, along with their quality ratings, to construct multiple preference pairs. They then apply curriculum learning to DPO training, ordering these pairs from easy to hard according to various criteria. The resulting approach, Curry-DPO, shows consistent performance gains on the MT-Bench, Vicuna, WizardLM, and UltraFeedback test sets, outperforming existing LLMs of similar parameter size. Specifically, Curry-DPO achieves a score of 7.43 on MT-Bench with the Zephyr-7B model and the highest adjusted win rates on the Vicuna, WizardLM, and UltraFeedback test sets (90.7%, 87.1%, and 87.9%, respectively). The authors release the preference pairs used in alignment at https://huggingface.co/datasets/ServiceNow-AI/Curriculum_DPO_preferences. A code sketch of the pair construction and ordering appears below the table. |
| Low | GrooveSquid.com (original content) | This paper talks about a way to make computers understand what humans like or dislike. It’s called Direct Preference Optimization (DPO). Instead of looking at just one response to a question, the authors suggest looking at multiple responses and how good each one is compared to the others. This helps the computer learn what people prefer. They tested this idea and found that it worked really well on several big datasets. The new approach is called Curry-DPO. It’s like a special recipe for making computers smarter. |
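To make the pipeline in the medium summary concrete, below is a minimal, illustrative sketch of how rated responses could be turned into preference pairs and ordered easy-to-hard for curriculum-style DPO training. The pairing of the top-rated response against each lower-rated one, the use of the rating gap as the difficulty criterion, and all names in the code (`PreferencePair`, `build_pairs`, `curriculum_order`) are assumptions for illustration, not the authors' exact implementation; the released dataset linked above contains the actual pairs.

```python
# Illustrative sketch only: construct ranked preference pairs per prompt and
# order them easy-to-hard. Field names and the gap-based difficulty criterion
# are assumptions, not the paper's exact recipe.
from dataclasses import dataclass
from typing import List


@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # higher-rated response
    rejected: str  # lower-rated response
    gap: float     # rating difference, used here as a difficulty proxy


def build_pairs(prompt: str, responses: List[str], ratings: List[float]) -> List[PreferencePair]:
    """Pair the top-rated response with every lower-rated one (one possible scheme)."""
    ranked = sorted(zip(responses, ratings), key=lambda x: x[1], reverse=True)
    best_resp, best_rating = ranked[0]
    return [
        PreferencePair(prompt, best_resp, resp, best_rating - rating)
        for resp, rating in ranked[1:]
    ]


def curriculum_order(pairs: List[PreferencePair]) -> List[PreferencePair]:
    """Easy-to-hard ordering: a larger rating gap is treated as an easier pair."""
    return sorted(pairs, key=lambda p: p.gap, reverse=True)


# Toy usage: each curriculum stage would then run a standard DPO update on its
# slice of pairs (e.g. with an off-the-shelf DPO trainer), starting from the
# model produced by the previous, easier stage.
pairs = build_pairs(
    "Explain curriculum learning.",
    ["Detailed, correct answer", "Okay answer", "Weak answer"],
    [9.0, 6.5, 3.0],
)
for p in curriculum_order(pairs):
    print(f"gap={p.gap:.1f}  chosen='{p.chosen}'  rejected='{p.rejected}'")
```

Whether difficulty is defined by the rating gap or by another criterion is one of the design choices the paper compares; the sketch above simply picks one concrete option.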
Keywords
* Artificial intelligence
* Alignment
* Curriculum learning
* Optimization
* Prompt