Summary of CLIPErase: Efficient Unlearning of Visual-Textual Associations in CLIP, by Tianyu Yang et al.
CLIPErase: Efficient Unlearning of Visual-Textual Associations in CLIP, by Tianyu Yang, Lisen Dai, Zheyuan Liu, Xiangqi…