Summary of Distilling Invariant Representations with Dual Augmentation, by Nikolaos Giakoumoglou et al.
Distilling Invariant Representations with Dual Augmentation, by Nikolaos Giakoumoglou, Tania Stathaki. First submitted to arXiv on: 12…