Summary of Thinking Forward: Memory-Efficient Federated Finetuning of Language Models, by Kunjal Panchal et al.
Thinking Forward: Memory-Efficient Federated Finetuning of Language Models by Kunjal Panchal, Nisarg Parikh, Sunav Choudhary, Lijun…
DAGER: Exact Gradient Inversion for Large Language Models by Ivo Petrov, Dimitar I. Dimitrov, Maximilian Baader,…
Towards Client Driven Federated Learning by Songze Li, Chenqing Zhu. First submitted to arXiv on: 24 May…
FedCal: Achieving Local and Global Calibration in Federated Learning via Aggregated Parameterized Scaler by Hongyi Peng,…
Decaf: Data Distribution Decompose Attack against Federated Learning by Zhiyang Dai, Chunyi Zhou, Anmin Fu. First submitted…
Recurrent Early Exits for Federated Learning with Heterogeneous Clients by Royson Lee, Javier Fernandez-Marques, Shell Xu…
Overcoming the Challenges of Batch Normalization in Federated Learning by Rachid Guerraoui, Rafael Pinot, Geovani Rizk,…
Variational Bayes for Federated Continual Learning by Dezhong Yao, Sanmu Li, Yutong Dai, Zhiqiang Xu, Shengshan…
Rehearsal-free Federated Domain-incremental Learning by Rui Sun, Haoran Duan, Jiahua Dong, Varun Ojha, Tejal Shah, Rajiv…
CG-FedLLM: How to Compress Gradients in Federated Fune-tuning for Large Language Models by Huiwen Wu, Xiaohan…