Summary of Multi-Level Feature Distillation of Joint Teachers Trained on Distinct Image Datasets, by Adrian Iordache et al.
Multi-Level Feature Distillation of Joint Teachers Trained on Distinct Image Datasets
by Adrian Iordache, Bogdan Alexe, Radu Tudor Ionescu
First submitted to arXiv on: 29 Oct 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | A novel teacher-student framework is proposed to distill knowledge from multiple teachers trained on distinct datasets. Each teacher is first trained independently, then the teachers are combined into a joint architecture that fuses their features at multiple representation levels. The joint architecture is fine-tuned on all datasets, gathering generic knowledge. A multi-level feature distillation procedure then transfers this knowledge to a separate student model for each dataset. Image classification experiments are conducted on seven benchmarks, and action recognition experiments on three. To illustrate the power of the approach, the student architectures are kept identical to those of the individual teachers. To demonstrate its flexibility, teachers with distinct architectures are also combined. The proposed Multi-Level Feature Distillation (MLFD) significantly surpasses equivalent architectures trained on individual datasets or jointly across all datasets. A comprehensive ablation study confirms that each step of the procedure is well-motivated. Code is publicly released at this URL. An illustrative code sketch of the distillation objective follows the table. |
| Low | GrooveSquid.com (original content) | A new way to learn from many teachers is proposed. Each teacher learns on its own, and the teachers are then combined so they can share their knowledge. This helps a student model learn better too. The approach is tested on image classification and action recognition tasks across multiple datasets. Combining teachers in this way is shown to be more effective than training each model separately or training one model jointly on all datasets. The code for the method is publicly available. |
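To make the multi-level feature distillation idea from the medium-difficulty summary more concrete, below is a minimal sketch of a distillation objective in which a student matches the features of a frozen joint teacher at several depths, in addition to the usual classification loss. This is not the authors' released code: the class name, the per-level linear projections, the feature dimensions, and the `alpha` weighting are all assumptions made for illustration.

```python
# Illustrative sketch (not the paper's released implementation):
# multi-level feature distillation against a frozen joint teacher.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelDistillationLoss(nn.Module):
    def __init__(self, student_dims, teacher_dims, alpha=0.5):
        super().__init__()
        # One linear projection per level, mapping student features to the
        # teacher's feature dimensionality (dimensions are assumptions).
        self.projections = nn.ModuleList(
            nn.Linear(s, t) for s, t in zip(student_dims, teacher_dims)
        )
        self.alpha = alpha  # weight of the distillation term (assumed value)

    def forward(self, logits, labels, student_feats, teacher_feats):
        # Standard cross-entropy on the student's predictions.
        ce = F.cross_entropy(logits, labels)
        # Mean squared error between projected student features and the
        # detached joint-teacher features, averaged over all levels.
        distill = sum(
            F.mse_loss(proj(s), t.detach())
            for proj, s, t in zip(self.projections, student_feats, teacher_feats)
        ) / len(self.projections)
        return ce + self.alpha * distill

# Toy usage with random tensors: batch of 8, 10 classes, two feature levels.
if __name__ == "__main__":
    criterion = MultiLevelDistillationLoss(student_dims=[256, 512],
                                           teacher_dims=[384, 768])
    logits = torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    student_feats = [torch.randn(8, 256), torch.randn(8, 512)]
    teacher_feats = [torch.randn(8, 384), torch.randn(8, 768)]
    print(criterion(logits, labels, student_feats, teacher_feats).item())
```

In this sketch, the teacher features would come from the fine-tuned joint architecture with its weights frozen, and a separate student would be trained per dataset; how the levels are selected and weighted in the actual MLFD method is described in the paper itself.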
Keywords
» Artificial intelligence » Distillation » Image classification » Student model