Summary of Sinkhorn Distance Minimization for Knowledge Distillation, by Xiao Cui et al.
Sinkhorn Distance Minimization for Knowledge Distillation
by Xiao Cui, Yulei Qin, Yuting Gao, Enwei Zhang, Zihan Xu, Tong Wu, Ke Li, Xing Sun, Wengang Zhou, Houqiang Li
First submitted to arxiv on: 27 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty: the medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The proposed Sinkhorn Knowledge Distillation (SinKD) method addresses limitations of existing knowledge distillation (KD) objectives by using the Sinkhorn distance to measure the disparity between teacher and student output distributions. The paper shows that the widely used KL, RKL, and JS divergences suffer from mode-averaging, mode-collapsing, and mode-underestimation problems, which weakens supervision on diverse NLP tasks. By replacing these divergences with the Sinkhorn distance, SinKD improves logits-based KD for encoder-only, encoder-decoder, and decoder-only architectures on the GLUE and SuperGLUE benchmarks (a minimal code sketch follows this table). |
| Low | GrooveSquid.com (original content) | Knowledge distillation (KD) helps compress large language models (LLMs). This paper finds problems with current methods like KL, RKL, and JS divergences. They make mistakes when teacher and student distributions don’t match well. The new Sinkhorn Knowledge Distillation method solves these issues by using the Sinkhorn distance to measure how different the teacher and student are. It performs better than other methods on many types of LLMs. |
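To make the idea more concrete, here is a minimal, hypothetical PyTorch sketch of a logits-based Sinkhorn KD loss: teacher and student logits are softened into probability distributions, and the entropy-regularized optimal-transport (Sinkhorn) distance between them serves as the distillation objective. The cost matrix, temperature, regularization strength, and iteration count below are illustrative assumptions, not the authors' released implementation or settings.

```python
import torch
import torch.nn.functional as F

def sinkhorn_kd_loss(student_logits, teacher_logits, temperature=2.0,
                     epsilon=0.1, n_iters=20):
    """Entropy-regularized OT (Sinkhorn) distance between teacher and student
    class distributions. Illustrative sketch, not the paper's code."""
    # Softened probabilities over the class/vocabulary dimension, shape (B, C).
    p_s = F.softmax(student_logits / temperature, dim=-1)
    p_t = F.softmax(teacher_logits / temperature, dim=-1)

    B, C = p_s.shape
    # Placeholder ground cost: 0 on the diagonal, 1 elsewhere
    # (the paper may define a different cost between classes).
    cost = 1.0 - torch.eye(C, device=p_s.device)
    K = torch.exp(-cost / epsilon)  # Gibbs kernel, shape (C, C)

    # Sinkhorn iterations: alternately rescale u and v so that the plan
    # diag(u) K diag(v) has marginals p_s and p_t.
    u = torch.ones_like(p_s) / C
    v = torch.ones_like(p_t) / C
    for _ in range(n_iters):
        u = p_s / (v @ K.T + 1e-9)
        v = p_t / (u @ K + 1e-9)

    # Transport plan pi[b, i, j] = u[b, i] * K[i, j] * v[b, j];
    # the loss is its total transport cost, averaged over the batch.
    pi = u.unsqueeze(-1) * K.unsqueeze(0) * v.unsqueeze(-2)
    return (pi * cost.unsqueeze(0)).sum(dim=(-2, -1)).mean()
```

In practice, such a distillation term would typically be combined with the usual cross-entropy loss on ground-truth labels via a weighting hyperparameter.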
Keywords
* Artificial intelligence * Decoder * Encoder * Encoder-decoder * Knowledge distillation * Logits * NLP