Bayes Conditional Distribution Estimation for Knowledge Distillation Based on Conditional Mutual Information

by Linfeng Ye, Shayan Mohajer Hamidi, Renhao Tan, En-Hui Yang

First submitted to arXiv on: 16 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computer Vision and Pattern Recognition (cs.CV); Information Theory (cs.IT)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available via the arXiv listing.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper introduces a novel approach to knowledge distillation (KD) that incorporates conditional mutual information (CMI) into the estimation of the Bayes conditional probability distribution (BCPD). The proposed maximum CMI (MCMI) method trains the teacher to simultaneously maximize both the log-likelihood and the CMI, rather than the log-likelihood alone. Eigen-CAM visualizations show that an MCMI-trained teacher captures more contextual information from an image cluster. Experiments demonstrate that using an MCMI-trained teacher in various KD frameworks consistently boosts student classification accuracy, with gains of up to 3.32%, and that the improvement is even more pronounced in zero-shot and few-shot settings.
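To make "maximize log-likelihood and CMI simultaneously" concrete, below is a minimal PyTorch sketch of what such a teacher-training loss could look like. The batch-wise plug-in CMI estimate (per-class KL divergence of each sample's predictive distribution from the class-conditional mean), the function names, and the weight `lam` are illustrative assumptions, not the authors' exact implementation; consult the paper for the precise formulation.

```python
import torch
import torch.nn.functional as F

def batch_cmi(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Plug-in batch estimate of I(X; Yhat | Y): for each ground-truth class,
    average the KL divergence between each sample's predictive distribution
    and the class-conditional mean distribution, weighted by class frequency."""
    probs = F.softmax(logits, dim=1)
    cmi = logits.new_zeros(())
    for c in labels.unique():
        p_c = probs[labels == c]                # predictions for class c
        q_c = p_c.mean(dim=0, keepdim=True)     # class-conditional centroid
        # KL(p_c || q_c) summed over classes, averaged over samples
        kl = (p_c * (p_c.clamp_min(1e-12).log()
                     - q_c.clamp_min(1e-12).log())).sum(dim=1)
        cmi = cmi + kl.mean() * (p_c.shape[0] / labels.shape[0])
    return cmi

def mcmi_loss(logits: torch.Tensor, labels: torch.Tensor,
              lam: float = 0.1) -> torch.Tensor:
    """Jointly maximize log-likelihood and CMI: minimize CE - lam * CMI."""
    return F.cross_entropy(logits, labels) - lam * batch_cmi(logits, labels)
```

In this sketch, `mcmi_loss` would simply stand in for the usual cross-entropy loss when training (or fine-tuning) the teacher; the student is then distilled from that teacher as in any standard KD pipeline.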
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper presents a new way to improve knowledge distillation by incorporating conditional mutual information into the estimation of the Bayes conditional probability distribution. The method helps the teacher provide better probability estimates to the student, which improves accuracy on classification tasks. The results show that this approach can deliver significant accuracy gains, especially when little training data is available.

Keywords

  • Artificial intelligence
  • Classification
  • Few-shot
  • Knowledge distillation
  • Log-likelihood
  • Probability
  • Zero-shot