
Summary of PLaD: Preference-based Large Language Model Distillation with Pseudo-Preference Pairs, by Rongzhi Zhang et al.


PLaD: Preference-based Large Language Model Distillation with Pseudo-Preference Pairs

by Rongzhi Zhang, Jiaming Shen, Tianqi Liu, Haorui Wang, Zhen Qin, Feng Han, Jialu Liu, Simon Baumgartner, Michael Bendersky, Chao Zhang

First submitted to arXiv on: 5 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original content by GrooveSquid.com)
The paper presents PLaD, a novel approach to distilling knowledge from Large Language Models (LLMs) into smaller models that tackles challenges such as restricted access to the teacher, the teacher-student capacity gap, and student miscalibration. PLaD exploits the teacher-student capacity discrepancy to generate pseudo-preference pairs, in which teacher outputs are preferred over the student's own outputs, and uses them to re-calibrate the student's estimation of sequence likelihood. This shifts the student's focus toward understanding the relative quality of outputs rather than simply imitating the teacher. The authors demonstrate the effectiveness of PLaD through extensive experiments on two sequence generation tasks with various LLMs.
Low Difficulty Summary (original content by GrooveSquid.com)
Large Language Models (LLMs) are really good at certain jobs, but they take up a lot of space and can't be run everywhere. To help, scientists use Knowledge Distillation (KD), which lets smaller models learn from the big ones. But this approach runs into problems with LLMs: the big model may be locked away behind restricted access, and the small model can become overconfident about its own answers. The new method, PLaD, solves these issues by having the small model compare the teacher's answers with its own and learn which ones are better. This makes the smaller model focus on producing good answers instead of just copying what the big one says. By testing PLaD on different tasks and models, the scientists showed that it really works.

Keywords

» Artificial intelligence  » Knowledge distillation  » Likelihood  » Student model  » Teacher model