
Summary of Learning to Project for Cross-Task Knowledge Distillation, by Dylan Auty et al.


Learning to Project for Cross-Task Knowledge Distillation

by Dylan Auty, Roy Miles, Benedikt Kolbeinsson, Krystian Mikolajczyk

First submitted to arXiv on: 21 Mar 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a modification to traditional knowledge distillation (KD): an inverted projection that extends many existing KD methods to the cross-task setting. This simple drop-in replacement for a standard projector learns to disregard task-specific features that might degrade the student's performance. The method achieves up to a 1.9% improvement in the cross-task setting at no additional cost. Even randomly-initialized teachers can be used on tasks such as depth estimation, image translation, and semantic segmentation, yielding significant performance improvements of up to 7%. The paper also provides analytical insight into this result by decomposing the distillation loss into knowledge-transfer and spectral-regularisation components. Building on this analysis, a novel regularisation loss is proposed for teacher-free distillation, enabling performance improvements of up to 8.57% on ImageNet with no additional training costs. A rough code sketch of the inverted-projection idea appears after the summaries below.
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a new way to improve traditional knowledge distillation. Instead of using a teacher model trained on the same task as the student, it suggests using any teacher model trained on a different task. This is called cross-task distillation. The authors show that by making a simple change to the standard projector, many KD methods can be used in this new way. They find that this approach works well, and that even randomly-initialized teachers can help on certain tasks. This could be useful when there isn't enough data or compute to train a teacher model on the student's task.
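
To make the idea in the summaries above more concrete, here is a minimal PyTorch sketch of cross-task feature distillation with an "inverted" projection. Based only on the summary, the learned projector is assumed to sit on the teacher side, mapping (possibly cross-task or randomly-initialized) teacher features into the student's feature space so it can learn to drop teacher features that are irrelevant to the student's task. The module name, dimensions, and the L2 matching loss are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch: inverted-projection feature distillation.
# The projector acts on teacher features (inverted w.r.t. the usual
# student-side projector used in feature distillation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class InvertedProjectionKD(nn.Module):
    def __init__(self, teacher_dim: int, student_dim: int):
        super().__init__()
        # Linear projection from teacher feature space to student feature space.
        self.projector = nn.Linear(teacher_dim, student_dim, bias=False)

    def forward(self, student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
        # student_feat: (B, student_dim), teacher_feat: (B, teacher_dim)
        projected_teacher = self.projector(teacher_feat)
        # Simple L2 matching between student features and projected teacher
        # features (one common choice; the paper may use a different distance).
        return F.mse_loss(student_feat, projected_teacher)


# Usage sketch (shapes are arbitrary examples):
if __name__ == "__main__":
    kd = InvertedProjectionKD(teacher_dim=2048, student_dim=512)
    s = torch.randn(8, 512)   # student backbone features
    t = torch.randn(8, 2048)  # frozen teacher features (any task, even random init)
    loss_kd = kd(s, t)        # added to the student's task loss during training
    print(loss_kd.item())
```

In this sketch the distillation loss would simply be added to the student's normal task loss during training, with the teacher kept frozen; because the projector is trained jointly with the student, it can learn to suppress teacher directions that do not help the student's task.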

Keywords

» Artificial intelligence  » Depth estimation  » Distillation  » Knowledge distillation  » Semantic segmentation  » Teacher model  » Translation