
Aggressive or Imperceptible, or Both: Network Pruning Assisted Hybrid Byzantines in Federated Learning

by Emre Ozfatura, Kerem Ozfatura, Alptekin Kupcu, Deniz Gunduz

First submitted to arXiv on: 9 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR); Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
In this paper, researchers tackle the security risks associated with federated learning (FL), a technology that enables multiple devices to collaborate on training machine learning models without sharing their data. While FL allows for more accurate model training by leveraging local data from each device, it also increases the risk of malicious clients poisoning the model during training. To mitigate this threat, the authors propose strategies for defending against Byzantine attacks in FL. Specifically, they analyze the topology of neural networks (NNs) and develop methods to minimize the impact of malicious clients on model accuracy.
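The poisoning threat described above can be illustrated with a toy federated-averaging round: honest clients send gradient-descent updates, while a single Byzantine client sends an amplified, sign-flipped update that drags the server's naive average in the wrong direction. This is a minimal sketch for intuition only; the function names and the sign-flipping attack are assumptions for illustration, not the hybrid attack studied in the paper.

```python
import numpy as np

def local_update(global_model, data_grad, lr=0.1):
    # Honest client: one gradient-descent step on local data.
    return global_model - lr * data_grad

def byzantine_update(global_model, data_grad, lr=0.1, scale=10.0):
    # Malicious client: amplified, sign-flipped update (illustrative attack).
    return global_model + scale * lr * data_grad

def fedavg(updates):
    # Server naively averages all received model updates.
    return np.mean(updates, axis=0)

global_model = np.zeros(4)
true_grad = np.ones(4)  # pretend every client observes the same gradient

honest = [local_update(global_model, true_grad) for _ in range(9)]
attacker = byzantine_update(global_model, true_grad)

clean = fedavg(honest)               # average of honest updates only
poisoned = fedavg(honest + [attacker])  # one attacker flips the sign of the average
```

With nine honest clients moving each coordinate to -0.1, a single scaled attacker is enough to pull the averaged coordinate back to +0.01, reversing the direction of training; this is why robust aggregation rules (rather than a plain mean) are studied in this setting.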
Low Difficulty Summary (written by GrooveSquid.com, original content)
Federated learning is a way for many devices to work together to create a better machine learning model without sharing their data. This helps keep the data private, which matters because some data may be sensitive or confidential. The problem is that with so many devices working together, it can be hard to tell whether one of them is trying to trick the system by sending bad information. This could make the model less accurate or even break it entirely. To fix this, scientists have been looking for ways to stop these bad actors from causing problems. However, their defenses haven't always taken into account how neural networks (the special kind of computer program being trained) are structured when dealing with these attacks.

Keywords

» Artificial intelligence  » Federated learning  » Machine learning