

Linkage on Security, Privacy and Fairness in Federated Learning: New Balances and New Perspectives

by Linlin Wang, Tianqing Zhu, Wanlei Zhou, Philip S. Yu

First submitted to arXiv on: 16 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR); Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (GrooveSquid.com original content)
This survey provides a comprehensive overview of the privacy, security, and fairness issues in federated learning, a rapidly growing paradigm for applications involving mobile devices, banking systems, healthcare, and IoT systems. The study reveals an intricate interplay between these three critical concepts: some researchers have found that pursuing fairness can compromise privacy, while others have found that efforts to enhance security can impact fairness. By delving deeper into these interconnections, the survey highlights the fundamental links between privacy, security, and fairness within federated learning, which could significantly advance research and development across the field.

Low Difficulty Summary (GrooveSquid.com original content)
Federated learning is a new way for devices like smartphones and smart home appliances to work together without sharing their data. This helps keep our personal information safe from hackers. But there's a problem: it's not clear how to balance keeping our data private with making sure the models are fair and secure. Some people think that if we prioritize fairness, we might lose some privacy. Others believe that trying to make the models more secure could actually make them less fair. The study aims to understand these trade-offs better so we can build models that balance privacy, security, and fairness.
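The core mechanism described above, devices training locally and sharing only model parameters rather than raw data, can be illustrated with a minimal sketch of federated averaging. This is a toy example, not the paper's method: the clients, data, and learning settings below are hypothetical, and a real deployment would add the privacy and security protections the survey discusses.

```python
def local_update(weight, data, lr=0.01, steps=10):
    """One client's local training: fit y = w*x by gradient descent
    on its own private data only. The raw data stays on the device."""
    for _ in range(steps):
        grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
        weight -= lr * grad
    return weight

def fed_avg(global_weight, client_datasets, rounds=20):
    """Server loop: broadcast the global weight, collect the locally
    trained weights, and average them. Only parameters are exchanged."""
    for _ in range(rounds):
        local_weights = [local_update(global_weight, d) for d in client_datasets]
        global_weight = sum(local_weights) / len(local_weights)
    return global_weight

# Three hypothetical clients whose private data all follow y = 2x.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
    [(5.0, 10.0), (0.5, 1.0)],
]
w = fed_avg(0.0, clients)  # converges toward the shared slope of 2
```

Note that even this parameter sharing can leak information about client data, which is exactly the kind of privacy-security-fairness tension the survey examines.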

Keywords

* Artificial intelligence
* Federated learning