Summary of FedReview: A Review Mechanism for Rejecting Poisoned Updates in Federated Learning, by Tianhang Zheng and Baochun Li
FedReview: A Review Mechanism for Rejecting Poisoned Updates in Federated Learning
by Tianhang Zheng, Baochun Li
First submitted to arXiv on: 26 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the paper's original abstract. |
Medium | GrooveSquid.com (original content) | In this paper, the researchers develop a review mechanism called FedReview to detect and reject poisoned updates in federated learning systems. Federated learning enables decentralized training of AI models without sharing user data, but attackers can exploit this setting by uploading poisoned model updates. FedReview addresses the issue by randomly assigning a subset of clients as reviewers, who evaluate the submitted model updates on their own local datasets. The server then combines the reviews through a majority-voting mechanism and removes the updates estimated to be poisoned before aggregation. Evaluated on multiple datasets, FedReview is shown to help the server learn well-performing global models in adversarial environments (a rough code sketch of the review-and-vote procedure appears after this table). |
Low | GrooveSquid.com (original content) | Federated learning lets computers learn together without sharing personal information. But bad actors can try to cheat by sending fake model updates. To stop this, the authors created a new method called FedReview. It works by picking some computers at random to be reviewers, who check whether the updates look good or bad. The main computer then looks at all the reviews and decides which updates to keep and which to throw away. This helps make sure the AI learns correctly even when people try to trick it. |
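To make the review-and-vote idea concrete, below is a minimal, self-contained sketch in Python. It is not the authors' implementation: the review score (mean squared error of a toy linear model), the function names (`evaluate_update`, `fedreview_aggregate`), the number of rejections per reviewer, and the rejection threshold are all illustrative assumptions; the paper's actual procedure evaluates clients' model updates on the reviewers' local training data.

```python
# Illustrative sketch of a FedReview-style aggregation (not the authors' code):
# reviewers score each submitted update on their own local data, and the server
# drops updates that a majority of reviewers rank among the worst before averaging.
import numpy as np

def evaluate_update(update, reviewer_data):
    """Hypothetical review score: mean squared error of a linear model with
    weights `update` on the reviewer's local data (lower is better)."""
    X, y = reviewer_data
    preds = X @ update
    return float(np.mean((preds - y) ** 2))

def fedreview_aggregate(updates, reviewer_datasets, num_reject=1):
    """Each reviewer nominates its worst-scoring updates; updates rejected by a
    majority of reviewers are excluded, and the rest are averaged."""
    reject_votes = np.zeros(len(updates))
    for data in reviewer_datasets:
        scores = [evaluate_update(u, data) for u in updates]
        worst = np.argsort(scores)[-num_reject:]  # indices of the worst updates
        reject_votes[worst] += 1
    majority = len(reviewer_datasets) / 2
    kept = [u for i, u in enumerate(updates) if reject_votes[i] <= majority]
    return np.mean(kept, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0, 0.5])
    # Benign updates cluster near the true weights; one poisoned update does not.
    updates = [true_w + 0.05 * rng.normal(size=3) for _ in range(4)]
    updates.append(np.array([10.0, 10.0, 10.0]))  # poisoned update
    reviewers = []
    for _ in range(3):
        X = rng.normal(size=(50, 3))
        reviewers.append((X, X @ true_w + 0.01 * rng.normal(size=50)))
    print(fedreview_aggregate(updates, reviewers, num_reject=1))
```

In this toy run, all three honest reviewers flag the obviously poisoned update, so it is excluded and the aggregated weights stay close to the benign consensus.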
Keywords
* Artificial intelligence
* Federated learning