


Embedding Byzantine Fault Tolerance into Federated Learning via Virtual Data-Driven Consistency Scoring Plugin

by Youngjoon Lee, Jinu Gong, Joonhyuk Kang

First submitted to arXiv on: 15 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
The proposed plugin integrates with existing federated learning methods to achieve Byzantine resilience against compromised edge devices. The key idea is to generate virtual data samples and evaluate how consistently each local update behaves on them, filtering out compromised devices before the aggregation phase. Because the scoring runs before aggregation, the plugin preserves the benefits of the underlying FL algorithm while adding robustness against attacks (a rough code sketch of this idea follows the summaries). Numerical results on medical image classification validate the effectiveness of the approach with representative FL algorithms.

Low Difficulty Summary (original content by GrooveSquid.com)
Federated learning lets many devices train a model together without sharing sensitive information. But what if some of those devices are trying to trick the system? In this paper, researchers propose a way to make federated learning more secure by detecting and excluding updates from compromised devices. They do this by creating virtual data samples and checking how consistently each device’s model behaves on them. This keeps the shared model honest and prevents it from being skewed by bad updates. The results show the approach works well on real-world medical image classification tasks.

Keywords

» Artificial intelligence  » Federated learning  » Image classification