Summary of Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization, by Zhe Li et al.
Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization, by Zhe Li, Bicheng Ying, Zidong Liu,…
Robust width: A lightweight and certifiable adversarial defense, by Jonathan Peck, Bart Goossens. First submitted to arXiv…
Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement, by Xiyao Wang, Jiuhai Chen,…
Towards Natural Machine Unlearning, by Zhengbao He, Tao Li, Xinwen Cheng, Zhehao Huang, Xiaolin Huang. First submitted…
Mosaic Memory: Fuzzy Duplication in Copyright Traps for Large Language Models, by Igor Shilov, Matthieu Meeus,…
NuwaTS: A Foundation Model Mending Every Incomplete Time Series, by Jinguo Cheng, Chunwei Yang, Wanlin Cai,…
BiSup: Bidirectional Quantization Error Suppression for Large Language Models, by Minghui Zou, Ronghui Guo, Sai Zhang,…
BDetCLIP: Multimodal Prompting Contrastive Test-Time Backdoor Detection, by Yuwei Niu, Shuo He, Qi Wei, Zongyu Wu,…
Prompt Tuning Strikes Back: Customizing Foundation Models with Low-Rank Prompt Adaptation, by Abhinav Jain, Swarat Chaudhuri,…
MallowsPO: Fine-Tune Your LLM with Preference Dispersions, by Haoxian Chen, Hanyang Zhao, Henry Lam, David Yao,…