
Summary of PFAttack: Stealthy Attack Bypassing Group Fairness in Federated Learning, by Jiashi Gao et al.


PFAttack: Stealthy Attack Bypassing Group Fairness in Federated Learning

by Jiashi Gao, Ziwei Wang, Xiangyu Zhao, Xin Yao, Xuetao Wei

First submitted to arXiv on: 9 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com, original content)
Federated learning (FL) increasingly integrates group fairness mechanisms so that the global model makes unbiased decisions across different populations. However, previous studies have shown that FL systems are vulnerable to model poisoning attacks. This paper explores a critical question: can an attacker bypass the group fairness mechanisms in FL and manipulate the global model to be biased? The proposed Profit-driven Fairness Attack (PFATTACK) aims not to degrade accuracy but to bypass the fairness mechanisms. Through local fine-tuning across groups, PFATTACK recovers the model's dependence on sensitive attributes, producing a malicious model that is biased yet accuracy-preserving. Because it causes only subtle parameter variations relative to the original global model, the attack is stealthier than attacks that target accuracy. Extensive experiments on benchmark datasets, covering four fair FL frameworks and three Byzantine-resilient aggregation rules against model poisoning, demonstrate that PFATTACK effectively bypasses group fairness mechanisms.
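To make the idea of the attack concrete, the following is a minimal sketch of what a malicious client's local update might look like: fine-tune the received global model so its predictions again depend on a sensitive attribute, while a proximity term keeps the parameters close to the global model so the update looks benign. This is an illustration only, not the authors' implementation; the loss terms, the `unfairness_weight` and `proximity_weight` hyperparameters, the group-gap penalty, and the `(x, y, s)` data layout are all assumptions made for the sketch.

```python
import copy

import torch
import torch.nn as nn


def malicious_local_update(global_model, loader, rounds=5,
                           unfairness_weight=1.0, proximity_weight=10.0):
    """Illustrative PFAttack-style local update (not the paper's code).

    The malicious client fine-tunes the global model to re-introduce a
    dependence on the sensitive attribute (here: widening the gap in
    average positive scores between two groups) while a proximity term
    keeps the parameters close to the received global model, so the
    returned update looks like a benign one.
    """
    model = copy.deepcopy(global_model)
    global_params = [p.detach().clone() for p in global_model.parameters()]

    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    task_loss = nn.BCEWithLogitsLoss()

    for _ in range(rounds):
        for x, y, s in loader:  # s: sensitive attribute in {0, 1} (assumed)
            logits = model(x).squeeze(-1)
            loss = task_loss(logits, y.float())  # keep task accuracy high

            # Bias term: reward a large gap between the groups' average
            # positive scores (a stand-in for a demographic-parity violation).
            probs = torch.sigmoid(logits)
            if (s == 1).any() and (s == 0).any():
                gap = probs[s == 1].mean() - probs[s == 0].mean()
                loss = loss - unfairness_weight * gap.abs()

            # Stealth term: penalise drift away from the received global model
            # so the poisoned update shows only subtle parameter variations.
            drift = sum(((p - g) ** 2).sum()
                        for p, g in zip(model.parameters(), global_params))
            loss = loss + proximity_weight * drift

            opt.zero_grad()
            loss.backward()
            opt.step()

    return model.state_dict()  # sent back to the server as the local update
```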
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper looks at a new type of attack called the Profit-driven Fairness Attack (PFATTACK). The attack tries to make the global model in federated learning biased without making it less accurate. In other words, it does not try to make the model incorrect, just unfair. Attackers can use it to get away with an unfair model while the model still appears to work well. The authors tested the attack on four different fair federated learning frameworks and found that it worked in all cases. This is a problem because it shows that the fairness mechanisms in federated learning are not robust enough, and we need to find ways to strengthen them.

Keywords

» Artificial intelligence  » Federated learning  » Fine tuning