Summary of RADIOv2.5: Improved Baselines for Agglomerative Vision Foundation Models, by Greg Heinrich et al.
RADIOv2.5: Improved Baselines for Agglomerative Vision Foundation Models, by Greg Heinrich, Mike Ranzinger, Hongxu Yin, Yao Lu, …