Summary of Defending Against Sophisticated Poisoning Attacks with RL-based Aggregation in Federated Learning, by Yujing Wang et al.
Defending Against Sophisticated Poisoning Attacks with RL-based Aggregation in Federated Learning
by Yujing Wang, Hainan Zhang, Sijia Wen, Wangjie Qiu, Binghui Guo
First submitted to arXiv on: 20 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | The paper addresses federated learning's vulnerability to model poisoning attacks and proposes AdaAggRL, an adaptive defense that uses reinforcement learning (RL) to identify malicious clients and adaptively aggregate client contributions. The authors observe that benign clients exhibit higher data distribution stability than malicious ones in computer vision (CV) and natural language processing (NLP) tasks. They use the maximum mean discrepancy (MMD) to compute similarities between the local, historical, and global models' data distributions, and this information is used to adaptively determine aggregation weights via policy learning (a minimal illustrative sketch of the MMD step follows the table). The proposed method outperforms traditional defense methods on four real-world datasets.
Low | GrooveSquid.com (original content) | Federated learning has a big problem: it's easy for bad actors to mess with the system. Current defenses don't hold up well against sneaky, sophisticated attacks, so something better is needed. This paper suggests using reinforcement learning (RL) to help identify the bad actors and stop them from ruining the system. It works by looking at how different clients' data behaves over time and finding patterns that show who's good or bad. It then uses this information to decide how much of each client's contribution to trust in the main system. The authors tested the new method on four real-world datasets and found that it worked much better than existing approaches.
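To make the MMD-based similarity idea from the medium summary concrete, here is a minimal sketch. It is not the paper's implementation: the names `rbf_mmd2` and `aggregation_weights`, the Gaussian-kernel bandwidth, and the softmax weighting are illustrative assumptions; AdaAggRL actually learns the mapping from distribution similarities to aggregation weights with an RL policy rather than a fixed softmax.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared maximum mean discrepancy between sample sets X (n, d) and Y (m, d),
    estimated with a Gaussian (RBF) kernel of bandwidth sigma."""
    def gram(A, B):
        # Pairwise squared Euclidean distances, then the RBF kernel.
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return np.exp(-d2 / (2.0 * sigma**2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

def aggregation_weights(client_samples, global_samples, temperature=1.0):
    """Toy stand-in for the learned policy: clients whose distribution is far
    (high MMD) from the global model's distribution are down-weighted by a softmax."""
    mmd = np.array([rbf_mmd2(c, global_samples) for c in client_samples])
    scores = -mmd / temperature              # lower discrepancy -> higher score
    w = np.exp(scores - scores.max())        # numerically stable softmax
    return w / w.sum()

# Example: three clients, one drawn from a shifted (poisoned-looking) distribution.
rng = np.random.default_rng(0)
global_ref = rng.normal(0, 1, size=(200, 8))
clients = [rng.normal(0, 1, size=(100, 8)),
           rng.normal(0, 1, size=(100, 8)),
           rng.normal(3, 1, size=(100, 8))]   # outlier client
print(aggregation_weights(clients, global_ref))  # third weight should be much smaller
```

In the paper's setting, the per-round similarities (to each client's own history and to the global model) would form the state observed by the RL policy, which then outputs the aggregation weights; the softmax here only stands in for that learned mapping.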
Keywords
» Artificial intelligence » Federated learning » Natural language processing » NLP » Reinforcement learning