Summary of Adversarial Training via Adaptive Knowledge Amalgamation of an Ensemble of Teachers, by Shayan Mohajer Hamidi et al.
Adversarial Training via Adaptive Knowledge Amalgamation of an Ensemble of Teachers by Shayan Mohajer Hamidi, Linfeng…