Summary of Gradients Stand-in for Defending Deep Leakage in Federated Learning, by H. Yi et al.
Gradients Stand-in for Defending Deep Leakage in Federated Learning, by H. Yi, H. Ren, C. Hu, …
Unlocking FedNL: Self-Contained Compute-Optimized Implementation, by Konstantin Burlachenko, Peter Richtárik. First submitted to arXiv on: 11 Oct…
GAI-Enabled Explainable Personalized Federated Semi-Supervised Learning, by Yubo Peng, Feibo Jiang, Li Dong, Kezhi Wang, Kun…
DistDD: Distributed Data Distillation Aggregation through Gradient Matching, by Peiran Wang, Haohan Wang. First submitted to arXiv…
Randomized Asymmetric Chain of LoRA: The First Meaningful Theoretical Framework for Low-Rank Adaptation, by Grigory Malinovsky, …
Scalable and Resource-Efficient Second-Order Federated Learning via Over-the-Air Aggregation, by Abdulmomen Ghalkha, Chaouki Ben Issaid, Mehdi…
FedEP: Tailoring Attention to Heterogeneous Data Distribution with Entropy Pooling for Decentralized Federated Learning, by Chao…
Boosting the Performance of Decentralized Federated Learning via Catalyst Acceleration, by Qinglun Li, Miao Zhang, Yingqi…
Benchmarking Data Heterogeneity Evaluation Approaches for Personalized Federated Learning, by Zhilong Li, Xiaohu Wu, Xiaoli Tang, …
Distributionally Robust Clustered Federated Learning: A Case Study in Healthcare, by Xenia Konti, Hans Riess, Manos…