Summary of LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging, by Ke Wang et al.
LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging, by Ke Wang, Nikolaos Dimitriadis, Alessandro…
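The title describes a post-training, layer-wise rescaling of fine-tuned weights. Below is a minimal sketch of that general idea, assuming (our reading of the title, not the paper's stated algorithm) that each layer's fine-tuning update is rescaled by a coefficient that grows linearly with depth, so shallow layers stay close to the pretrained model and forget less; the function name `lines_scale` and the parameters `alpha` and `beta` are illustrative, not from the paper:

```python
import torch

def lines_scale(pretrained, finetuned, alpha=0.0, beta=1.0):
    """Hypothetical sketch: rescale each layer's fine-tuning update.

    lambda_l = alpha + beta * l / (L - 1) for layer index l in [0, L-1],
    so shallow layers keep mostly pretrained weights while deep layers
    keep most of the fine-tuned update.
    """
    names = list(pretrained)  # assumes dict insertion order matches depth order
    L = len(names)
    merged = {}
    for l, name in enumerate(names):
        lam = alpha + beta * (l / (L - 1) if L > 1 else 1.0)
        # Interpolate: pretrained weights plus a damped task vector.
        merged[name] = pretrained[name] + lam * (finetuned[name] - pretrained[name])
    return merged

# Toy usage: the shallow layer's update is damped, the deep layer's is kept.
pre = {"layer0.w": torch.zeros(2), "layer1.w": torch.zeros(2)}
fin = {"layer0.w": torch.ones(2), "layer1.w": torch.ones(2)}
print(lines_scale(pre, fin))
```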
LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias, by Haian Jin, Hanwen Jiang,…
Efficient Frequency Selective Surface Analysis via End-to-End Model-Based Learning, by Cheima Hammami, Lucas Polo-López, Luc Le…
Evaluating the Effectiveness of Attack-Agnostic Features for Morphing Attack Detection, by Laurent Colbois, Sébastien Marcel
Fast Graph Sharpness-Aware Minimization for Enhancing and Accelerating Few-Shot Node Classification, by Yihong Luo, Yuhan Chen,…
Rethinking generalization of classifiers in separable classes scenarios and over-parameterized regimes, by Julius Martinetz, Christoph Linse,…
GALA: Graph Diffusion-based Alignment with Jigsaw for Source-free Domain Adaptation, by Junyu Luo, Yiyang Gu, Xiao…
In Search of the Successful Interpolation: On the Role of Sharpness in CLIP Generalization, by Alireza…
Simplicity Bias via Global Convergence of Sharpness Minimization, by Khashayar Gatmiry, Zhiyuan Li, Sashank J. Reddi,…
Towards Combating Frequency Simplicity-biased Learning for Domain Generalization, by Xilin He, Jingyu Hu, Qinliang Lin, Cheng…