Summary of Multi-Model based Federated Learning Against Model Poisoning Attack: A Deep Learning Based Model Selection for MEC Systems, by Somayeh Kianpisheh et al.
Multi-Model based Federated Learning Against Model Poisoning Attack: A Deep Learning Based Model Selection for MEC Systems
by Somayeh Kianpisheh, Chafika Benzaid, Tarik Taleb
First submitted to arXiv on: 12 Sep 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Networking and Internet Architecture (cs.NI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary This paper proposes multi-model Federated Learning (FL) as a proactive mechanism to mitigate model poisoning attacks. Conventional single-model FL is vulnerable because an attacker can upload a poisoned model whose structure matches that of the global model. To address this, the authors introduce a master-slave framework in which multiple client models are trained and their structures change dynamically across learning epochs, supported by a novel FL protocol that improves the chances of mitigating attacks. The paper also applies deep reinforcement learning to adapt model selection to dynamic network conditions in Multi-access Edge Computing (MEC) systems, demonstrating its effectiveness in recognizing attacks and improving recognition time. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary This paper makes it harder for hackers to damage models trained collaboratively online. Currently, if an attacker uploads a fake model that matches the structure of the real one, it can poison the shared model. To fix this, the researchers developed a system where many smaller models are trained and their structures change often, making it harder for attackers to craft fake models that fit the active one. The authors also showed how this approach can be used to detect denial-of-service attacks on networks, resulting in better accuracy and faster recognition times. |
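The core idea in the summaries above — keep several candidate model structures and switch among them across epochs, so a poisoned upload only lands if it matches the currently active structure — can be illustrated with a minimal sketch. Everything here is hypothetical: the names (`MultiModelFL`, `local_update`, `aggregate`), the toy "training" updates, and the random structure selection, which merely stands in for the paper's deep-reinforcement-learning-based model selection.

```python
import random

def local_update(weights, step=0.1):
    # Toy stand-in for a client's local SGD: nudge each weight slightly.
    return [w + step for w in weights]

def poisoned_update(weights):
    # An attacker crafts an update that fits the expected structure
    # but pushes the global model in a harmful direction.
    return [w * -10.0 for w in weights]

def aggregate(updates):
    # Coordinate-wise mean of same-structure updates (FedAvg-style).
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

class MultiModelFL:
    """Illustrative multi-model FL server.

    It maintains several candidate model structures (here, weight
    vectors of different sizes). Each epoch it activates one structure;
    a poisoned update only affects the global model when the attacker's
    guessed structure matches the active one.
    """

    def __init__(self, structures):
        # structures: dict name -> initial weight list (different sizes)
        self.models = {name: list(w) for name, w in structures.items()}

    def epoch(self, honest_clients, attacker_present=False):
        # Random choice stands in for the DRL-based selection policy.
        active = random.choice(list(self.models))
        weights = self.models[active]
        updates = [local_update(weights) for _ in range(honest_clients)]
        if attacker_present:
            guess = random.choice(list(self.models))
            if guess == active:  # poison lands only on a structure match
                updates.append(poisoned_update(weights))
        self.models[active] = aggregate(updates)
        return active
```

With more candidate structures (or structures that mutate within epochs, as the paper proposes), the attacker's chance of guessing the active structure shrinks, which is the mitigation opportunity the protocol exploits.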
Keywords
» Artificial intelligence » Federated learning » Online learning » Reinforcement learning