
An Empirical Analysis of Federated Learning Models Subject to Label-Flipping Adversarial Attack

by Kunal Bhatnagar, Sagana Chattanathan, Angela Dang, Bhargav Eranki, Ronnit Rana, Charan Sridhar, Siddharth Vedam, Angie Yao, Mark Stamp

First submitted to arXiv on: 24 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The paper's original abstract, available on arXiv, serves as the high difficulty summary.
Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper investigates the effectiveness of label-flipping adversarial attacks on federated learning models built from eight architectures: Multinomial Logistic Regression (MLR), Support Vector Classifier (SVC), Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Random Forest, XGBoost, and Long Short-Term Memory (LSTM). The authors simulate label-flipping attacks with 10 and 100 federated clients, varying both the percentage of adversarial clients (from 10% to 100%) and the percentage of labels each adversarial client flips. Results show that the models differ markedly in their robustness to these attack configurations, which has implications for practical deployments of federated learning.
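The experimental setup lends itself to a compact simulation. Below is a minimal sketch, not the authors' code, of a label-flipping attack under federated averaging (FedAvg), using plain NumPy and a binary logistic-regression model as a stand-in for the paper's model zoo. The client count, flip fraction, and helper names (flip_labels, local_sgd) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def flip_labels(y, flip_frac, num_classes):
    """Flip a fraction of labels to a different, randomly chosen class."""
    y = y.copy()
    n_flip = int(flip_frac * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    # Shift each chosen label by a random nonzero offset modulo num_classes
    offsets = rng.integers(1, num_classes, size=n_flip)
    y[idx] = (y[idx] + offsets) % num_classes
    return y

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """A few epochs of logistic-regression gradient descent on one client's shard."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))        # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)    # gradient step on logistic loss
    return w

# Synthetic binary task: 2000 samples, 20 features
X = rng.normal(size=(2000, 20))
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(float)

num_clients, adv_frac, flip_frac = 10, 0.3, 0.5
shards_X = np.array_split(X, num_clients)
shards_y = np.array_split(y, num_clients)

# Mark the first adv_frac of clients as adversarial label-flippers
adversarial = [i < int(adv_frac * num_clients) for i in range(num_clients)]

w_global = np.zeros(20)
for round_ in range(20):                    # federated rounds
    updates = []
    for i in range(num_clients):
        y_local = shards_y[i]
        if adversarial[i]:
            y_local = flip_labels(y_local.astype(int), flip_frac, 2).astype(float)
        updates.append(local_sgd(w_global.copy(), shards_X[i], y_local))
    w_global = np.mean(updates, axis=0)     # FedAvg: average client weights

acc = np.mean(((X @ w_global) > 0) == y)
print(f"global accuracy with poisoned clients: {acc:.3f}")
```

Sweeping adv_frac and flip_frac in this sketch mirrors the paper's grid of adversarial-client and label-flip percentages; the paper additionally repeats the experiment across the eight model types listed above.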
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper looks at how attacks can affect machine learning models that are trained together across many devices or computers. The researchers tested several types of models, like logistic regression and neural networks, and simulated an attack in which some participants deliberately mislabel their training data. They tried this with different numbers of devices and different amounts of mislabeled data. The results show that each model type handles these attacks differently. This study can help us understand how to make machine learning systems safer.

Keywords

» Artificial intelligence  » CNN  » Federated learning  » Logistic regression  » LSTM  » Machine learning  » Neural network  » Random forest  » RNN  » XGBoost