Summary of Unified Parameter-Efficient Unlearning for LLMs, by Chenlu Ding et al.
Unified Parameter-Efficient Unlearning for LLMs by Chenlu Ding, Jiancan Wu, Yancheng Yuan, Jinda Lu, Kai Zhang,…
On Foundation Models for Dynamical Systems from Purely Synthetic Data by Martin Ziegler, Andres Felipe Posada-Moreno,…
STEP: Enhancing Video-LLMs’ Compositional Reasoning by Spatio-Temporal Graph-guided Self-Training by Haiyi Qiu, Minghe Gao, Long Qian,…
Energy-Efficient Split Learning for Fine-Tuning Large Language Models in Edge Networks by Zuguang Li, Shaohua Wu,…
FonTS: Text Rendering with Typography and Style Controls by Wenda Shi, Yiren Song, Dengming Zhang, Jiaming…
Less is More: Efficient Model Merging with Binary Task Switch by Biqing Qi, Fangyuan Li, Zhen…
Condense, Don’t Just Prune: Enhancing Efficiency and Performance in MoE Layer Pruning by Mingyu Cao, Gen…
Reverse Thinking Makes LLMs Stronger Reasoners by Justin Chih-Yao Chen, Zifeng Wang, Hamid Palangi, Rujun Han,…
Initialization using Update Approximation is a Silver Bullet for Extremely Efficient Low-Rank Fine-Tuning by Kaustubh Ponkshe,…
SURE-VQA: Systematic Understanding of Robustness Evaluation in Medical VQA Tasks by Kim-Celine Kahl, Selen Erkan, Jeremias…