Summary of FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive Models, by Younghan Lee et al.
FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive Models
by Younghan Lee, Yungi Cho, Woorim Han, Ho Bae, Yunheung Paek
First submitted to arXiv on: 5 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR); Distributed, Parallel, and Cluster Computing (cs.DC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary The paper's original abstract (available on arXiv) |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary The paper proposes FLGuard, a novel federated learning (FL) method that detects malicious clients and discards their poisoned local updates using contrastive learning. FLGuard is further extended into an ensemble scheme to strengthen its defensive capability. The method is evaluated under various poisoning attacks and compared with existing Byzantine-robust FL methods; FLGuard outperforms state-of-the-art defenses in most cases, especially in non-IID settings. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary Federated learning helps train a global model without sharing clients' private data. But some bad actors can make the global model very wrong by pretending to be good clients. To fix this, researchers have suggested ways to make FL more robust against these attacks. The problem is that many of these methods need extra information or work only when the private data is similar across all clients. This paper proposes a new method called FLGuard that uses a technique called contrastive learning to detect and reject bad updates from malicious clients. It is an ensemble approach that combines multiple models to make it even better at detecting attacks. The results show that FLGuard is very good at defending against these attacks, especially when the private data isn't the same for all clients. |
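The summaries above describe the general defense pattern FLGuard follows: score each client's local update, discard the ones that look malicious, and aggregate only the rest. Below is a minimal, hypothetical sketch of that filter-then-aggregate loop. It does not implement FLGuard's actual contrastive models; instead it uses a plain cosine-similarity agreement score as a stand-in detector, purely to illustrate the pipeline. The function name `filter_and_aggregate` and the `n_keep` parameter are illustrative inventions, not from the paper.

```python
import numpy as np

def filter_and_aggregate(updates, n_keep):
    """Sketch of a filter-then-aggregate FL round (NOT FLGuard's method).

    Each client update is scored by its mean cosine similarity to the
    other updates; the n_keep most mutually consistent updates are kept
    and averaged (FedAvg-style). FLGuard replaces this heuristic score
    with an ensemble of contrastive models.
    """
    U = np.stack(updates)                          # (clients, params)
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    V = U / np.clip(norms, 1e-12, None)            # unit-normalize updates
    sim = V @ V.T                                  # pairwise cosine similarity
    np.fill_diagonal(sim, 0.0)                     # ignore self-similarity
    scores = sim.mean(axis=1)                      # agreement with the cohort
    keep = np.argsort(scores)[-n_keep:]            # most consistent clients
    return U[keep].mean(axis=0), sorted(keep.tolist())
```

In this toy setup, a poisoned update pointing away from the benign consensus gets a low agreement score and is dropped before averaging; real defenses like FLGuard learn that scoring function rather than hard-coding it.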
Keywords
* Artificial intelligence
* Federated learning