FedMID: A Data-Free Method for Using Intermediate Outputs as a Defense Mechanism Against Poisoning Attacks in Federated Learning
by Sungwon Han, Hyeonho Song, Sungwon Park, Meeyoung Cha
First submitted to arXiv on: 18 Apr 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract on arXiv. |
Medium | GrooveSquid.com (original content) | In this paper, the researchers tackle poisoning attacks in federated learning, where local updates from many clients are aggregated into a global model. They propose a new defense strategy that compares local models through functional mappings based on their intermediate outputs, rather than relying on Euclidean projections of their parameters. This approach better captures the functionality and structure of local models, leading to more consistent performance. Experiments under various computing conditions and advanced attack scenarios demonstrate the mechanism's effectiveness, making federated learning a safer option for data-sensitive participants. (A code sketch of this idea follows the table.) |
Low | GrooveSquid.com (original content) | Federated learning is a way for different devices or computers to train a model together without sending all of their data to one place. But sometimes, someone might try to sabotage this process by submitting fake updates that harm the final result. To prevent this, researchers have been developing defenses against such attacks. In this paper, they propose a method that works differently from previous approaches and can catch and block these malicious updates. |
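The medium-difficulty summary describes comparing local models by their intermediate outputs rather than by distances between parameter vectors. The sketch below illustrates that general idea in PyTorch; it is not the authors' FedMID implementation, and every name here (`intermediate_output`, `functional_distance`, `filter_clients`, the random probes, `keep_ratio`) is an illustrative assumption. For simplicity, each client model is assumed to be an `nn.Sequential` whose last layer is the classifier head.

```python
# A minimal sketch, assuming nn.Sequential client models: score clients by
# how their intermediate outputs on shared probe inputs differ, instead of
# by Euclidean distance between their parameters. Not the FedMID algorithm.
import torch
import torch.nn as nn

def intermediate_output(model: nn.Sequential, x: torch.Tensor) -> torch.Tensor:
    # Forward through every layer except the final classifier head,
    # so models are compared by what they compute internally.
    for layer in list(model)[:-1]:
        x = layer(x)
    return x

def functional_distance(model_a: nn.Sequential,
                        model_b: nn.Sequential,
                        probes: torch.Tensor) -> float:
    # Distance in function space: how differently two models map the
    # same probe inputs at an intermediate layer.
    with torch.no_grad():
        za = intermediate_output(model_a, probes)
        zb = intermediate_output(model_b, probes)
    return torch.norm(za - zb, dim=1).mean().item()

def filter_clients(client_models, num_probes=256, input_dim=32, keep_ratio=0.8):
    # "Data-free": the probes are random noise, so no client data is needed.
    probes = torch.randn(num_probes, input_dim)
    n = len(client_models)
    scores = []
    for i in range(n):
        dists = [functional_distance(client_models[i], client_models[j], probes)
                 for j in range(n) if j != i]
        scores.append(sorted(dists)[len(dists) // 2])  # median distance to peers
    # Keep the clients whose functional behavior agrees most with the rest;
    # outliers (likely poisoned updates) are dropped before aggregation.
    keep = sorted(range(n), key=lambda i: scores[i])[: max(1, int(keep_ratio * n))]
    return [client_models[i] for i in keep]
```

In a full federated round, the surviving models would then be aggregated (e.g., averaged) as usual. The point of the sketch is that outliers are scored in the models' output space rather than their parameter space, which is one way to read the summary's phrase "functional mappings of local models based on intermediate outputs."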
Keywords
» Artificial intelligence » Federated learning