Summary of Overcoming the Pitfalls of Vision-Language Model Finetuning for OOD Generalization, by Yuhang Zang et al.
Overcoming the Pitfalls of Vision-Language Model Finetuning for OOD Generalization, by Yuhang Zang, Hanlin Goh, Josh…
Divide and Conquer: Rethinking the Training Paradigm of Neural Radiance Fields, by Rongkai Ma, Leo Lebrat,…
Distilling Mathematical Reasoning Capabilities into Small Language Models, by Xunyu Zhu, Jian Li, Yong Liu, Can…
xCoT: Cross-lingual Instruction Tuning for Cross-lingual Chain-of-Thought Reasoning, by Linzheng Chai, Jian Yang, Tao Sun, Hongcheng…
Graph Relation Distillation for Efficient Biomedical Instance Segmentation, by Xiaoyu Liu, Yueyi Zhang, Zhiwei Xiong, Wei…
Less is More: A Closer Look at Semantic-based Few-Shot Learning, by Chunpeng Zhou, Haishuai Wang, Xilu…
Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs, by…
Federated Hybrid Training and Self-Adversarial Distillation: Towards Robust Edge Networks, by Yu Qiao, Apurba Adhikary, Kitae…
MoPD: Mixture-of-Prompts Distillation for Vision-Language Models, by Yang Chen, Shuai Fu, Yu Zhang. First submitted to arXiv…
Advanced Knowledge Transfer: Refined Feature Distillation for Zero-Shot Quantization in Edge Computing, by Inpyo Hong, Youngwan…