
Summary of Improve Knowledge Distillation via Label Revision and Data Selection, by Weichao Lan et al.


Improve Knowledge Distillation via Label Revision and Data Selection

by Weichao Lan, Yiu-ming Cheung, Qing Xu, Buhua Liu, Zhikai Hu, Mengke Li, Zhenghua Chen

First submitted to arXiv on: 3 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper proposes a novel approach to knowledge distillation (KD) for model compression, addressing the problem of unreliable supervision from the teacher model. Vanilla KD trains the student on soft labels taken from the teacher's predictions, but these can mislead the student when the teacher is wrong. To tackle this problem, the authors introduce two techniques: Label Revision, which uses the ground truth to rectify incorrect teacher predictions, and Data Selection, which picks out training samples that are suitable for teacher supervision. Both methods aim to reduce the impact of erroneous supervision on student training. Experimental results demonstrate the effectiveness of the proposed approach and show that it can be combined with other distillation methods to further improve performance.
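The two techniques are easier to picture in code. The sketch below is a rough PyTorch interpretation of the ideas described in the summary above, not the authors' implementation: the blending rule in revise_labels, the confidence threshold in select_for_distillation, and the loss weights are all assumptions made for illustration.

```python
# Minimal sketch of Label Revision and Data Selection for KD (assumptions noted inline).
import torch
import torch.nn.functional as F

def revise_labels(teacher_logits, targets, temperature=4.0, alpha=0.5):
    """Label Revision (sketch): blend the teacher's soft labels with the
    one-hot ground truth whenever the teacher's top-1 prediction is wrong."""
    soft = F.softmax(teacher_logits / temperature, dim=1)
    one_hot = F.one_hot(targets, num_classes=soft.size(1)).float()
    wrong = soft.argmax(dim=1) != targets              # samples the teacher mislabels
    revised = soft.clone()
    # hypothetical blending rule: pull erroneous soft labels toward the ground truth
    revised[wrong] = alpha * one_hot[wrong] + (1 - alpha) * soft[wrong]
    return revised

def select_for_distillation(teacher_logits, confidence=0.6):
    """Data Selection (sketch): keep a sample for teacher supervision only if
    the teacher is sufficiently confident on it (confidence criterion is assumed)."""
    probs = F.softmax(teacher_logits, dim=1)
    return probs.max(dim=1).values >= confidence

def kd_loss(student_logits, teacher_logits, targets, temperature=4.0, beta=0.5):
    """Combine cross-entropy on ground truth with KL distillation against the
    revised soft labels, restricted to the selected samples."""
    ce = F.cross_entropy(student_logits, targets)
    mask = select_for_distillation(teacher_logits)
    if mask.any():
        revised = revise_labels(teacher_logits[mask], targets[mask], temperature)
        log_p = F.log_softmax(student_logits[mask] / temperature, dim=1)
        kd = F.kl_div(log_p, revised, reduction="batchmean") * temperature ** 2
    else:
        kd = torch.zeros((), device=student_logits.device)
    return ce + beta * kd
```

In this reading, Data Selection decides which samples receive teacher supervision at all, while Label Revision cleans up the remaining teacher errors before they reach the student; the actual criteria and weighting in the paper may differ.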
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about making artificial intelligence (AI) models more efficient and accurate. Currently, there's a technique called knowledge distillation that helps transfer information from big AI models to smaller ones. However, this process isn't perfect because the bigger model can sometimes make mistakes. The authors of this paper suggest two ways to fix this issue: one is to correct the mistakes made by the bigger model, and the other is to use only the training data that the bigger model got right. They tested these ideas and found that they work well.

Keywords

* Artificial intelligence  * Distillation  * Knowledge distillation  * Model compression  * Student model  * Supervised