Summary of Multi-modal Vision Pre-training For Medical Image Analysis, by Shaohao Rui et al.
Multi-modal Vision Pre-training for Medical Image Analysis by Shaohao Rui, Lingzhi Chen, Zhenyu Tang, Lilong Wang, …