


Byzantine-resilient Federated Learning With Adaptivity to Data Heterogeneity

by Shiyuan Zuo, Xingrun Yan, Rongfei Fan, Han Hu, Hangguan Shan, Tony Q. S. Quek

First submitted to arXiv on: 20 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (GrooveSquid.com original content)

This paper proposes the Robust Average Gradient Algorithm (RAGA) to address federated learning (FL) in the presence of malicious Byzantine attacks and data heterogeneity. The algorithm uses the geometric median for aggregation and, unlike most existing resilient approaches, allows the number of local update rounds to be chosen freely. Convergence analysis is conducted for both strongly convex and non-convex loss functions over heterogeneous datasets. The theoretical results show that, as long as the fraction of malicious users is less than one half, RAGA converges at rate O(1/T^(2/3-δ)) for non-convex loss functions and linearly for strongly convex loss functions. Experimental results validate RAGA's robustness to Byzantine attacks and its advantage over baselines in convergence performance under attacks of varying intensity.

Low Difficulty Summary (GrooveSquid.com original content)

This paper explores how to make federated learning work well even when bad actors try to disrupt it and when data is not uniform across users. The researchers propose a new method called the Robust Average Gradient Algorithm (RAGA). It works like an averaging formula that still gives a sensible answer even if some of the contributed updates are fake or very different from the rest. They show that the algorithm keeps making progress toward a solution despite attackers, as long as fewer than half of the users are malicious. Experiments confirm that it outperforms other methods in these situations.

Keywords

  • Artificial intelligence
  • Federated learning