Summary of Accuracy-Privacy Trade-off in the Mitigation of Membership Inference Attack in Federated Learning, by Sayyed Farid Ahamed et al.


Accuracy-Privacy Trade-off in the Mitigation of Membership Inference Attack in Federated Learning

by Sayyed Farid Ahamed, Soumya Banerjee, Sandip Roy, Devin Quinn, Marc Vucovich, Kevin Choi, Abdul Rahman, Alison Hu, Edward Bowen, Sachin Shetty

First submitted to arXiv on: 26 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com; original content)
The paper investigates the relationship between federated learning (FL) and deep ensembles in machine learning. FL lets multiple clients collaboratively train a model while keeping their training data local, but the resulting models remain vulnerable to membership inference attacks (MIAs). The study reveals an accuracy-privacy trade-off in FL: increasing the number of clients can improve accuracy while weakening privacy. Experimenting with different numbers of clients, datasets, and fusion strategies, the authors find that the trade-off is non-monotonic, meaning that adding more clients does not consistently improve or worsen the balance between accuracy and privacy. These findings inform the design of FL systems that must balance both objectives.
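To make the fusion step concrete, here is a minimal sketch of federated averaging, a common FL fusion strategy; the function name, weighting scheme, and toy parameter vectors are our own illustration, not the paper's specific setup:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client parameter vectors (FedAvg-style fusion)."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)  # shape: (n_clients, n_params)
    coeffs = sizes / sizes.sum()        # each client's share of the total data
    return coeffs @ stacked             # weighted sum over clients

# Three clients with different amounts of local data
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 20, 70]
global_model = fedavg(clients, sizes)  # -> array([4.2, 5.2])
```

Because the server only ever sees parameter vectors, raw training data stays with each client; the paper's question is how much private membership information still leaks through the fused model.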
Low Difficulty Summary (written by GrooveSquid.com; original content)
Imagine a way for different groups to work together on a machine learning project without sharing their own data. This is called federated learning (FL). But there's a problem: by probing the trained model, an attacker can sometimes tell whether a particular piece of data was used to train it. The study looks at how FL relates to deep ensembles, a way to combine many models into one. The authors found that making the model more accurate often makes it less private, and vice versa. This has big implications for how we design these models in the future.
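The attack idea above can be sketched with a deliberately toy example: a "model" that is more confident on points it has memorized, and an attacker who exploits that confidence gap. This is our own simplified illustration of a confidence-based membership inference attack, not the paper's attack:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 50 "member" points the model trained on, 50 fresh "non-member" points.
train = rng.normal(0.0, 1.0, size=(50, 2))
test = rng.normal(0.0, 1.0, size=(50, 2))

def confidence(train_data, x):
    """Confidence proxy: closeness of x to the nearest training point.

    A model that memorizes its training set behaves this way: exactly 1.0
    on members, lower on unseen points.
    """
    d = np.min(np.linalg.norm(train_data - x, axis=1))
    return np.exp(-d)

# The attacker scores queries and flags high-confidence ones as "members".
member_scores = [confidence(train, x) for x in train]
nonmember_scores = [confidence(train, x) for x in test]
```

The gap between the two score distributions is what the attacker exploits; the paper studies how design choices like the number of FL clients widen or narrow this gap.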

Keywords

» Artificial intelligence  » Federated learning  » Inference  » Machine learning