
Summary of FLea: Addressing Data Scarcity and Label Skew in Federated Learning via Privacy-preserving Feature Augmentation, by Tong Xia et al.


FLea: Addressing Data Scarcity and Label Skew in Federated Learning via Privacy-preserving Feature Augmentation

by Tong Xia, Abhirup Ghosh, Xinchi Qiu, Cecilia Mascolo

First submitted to arXiv on: 13 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (GrooveSquid.com, original content)
The paper proposes a novel Federated Learning (FL) framework called FLea to address the challenges of local model overfitting and drift when dealing with scarce and label-skewed data across devices. The framework incorporates three key components: a global feature buffer, feature augmentation based on activation mix-ups, and an obfuscation method to enhance privacy. The authors conduct extensive experiments using various data modalities and demonstrate that FLea consistently outperforms state-of-the-art FL methods while mitigating privacy vulnerabilities.
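The summary above names feature augmentation based on activation mix-ups as one of FLea's components. The paper's own implementation is not shown here; as a rough illustration of the general idea, a client can interpolate its local intermediate activations (and their labels) with features drawn from a shared global buffer, mixup-style. The function name, the beta-distribution parameter, and the one-hot label format below are all assumptions for the sketch, not the authors' code:

```python
import numpy as np

def mixup_features(local_feats, local_labels, global_feats, global_labels,
                   alpha=2.0, rng=None):
    """Sketch of feature-level mix-up: blend each local activation with a
    randomly drawn activation from a global feature buffer.

    local_feats:   (n, d) array of a client's intermediate activations
    local_labels:  (n, c) one-hot labels for those activations
    global_feats:  (m, d) array of buffered features shared by other clients
    global_labels: (m, c) one-hot labels for the buffered features
    """
    if rng is None:
        rng = np.random.default_rng()
    # Mixing coefficient drawn from a Beta distribution, as in standard mixup.
    lam = rng.beta(alpha, alpha)
    # Pair each local feature with a random entry from the global buffer.
    idx = rng.integers(0, len(global_feats), size=len(local_feats))
    # Convex combination of features and of their (soft) labels.
    mixed_x = lam * local_feats + (1 - lam) * global_feats[idx]
    mixed_y = lam * local_labels + (1 - lam) * global_labels[idx]
    return mixed_x, mixed_y
```

Because the client trains on these blended features rather than on raw global data, no participant's raw inputs leave its device; the paper's obfuscation step would additionally transform the buffered features before sharing.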
Low Difficulty Summary (GrooveSquid.com, original content)
The paper is about a new way for machines to learn together without sharing their personal data, called Federated Learning (FL). Right now, this kind of learning runs into big problems when each device has very little data and the class labels are unevenly spread across devices. This makes the local models overfit to their own data and drift away from the main model. The new framework, FLea, tries to fix these issues by sharing important features between devices, making sure the local models don't over-specialize, and obfuscating the shared features to keep them safe.

Keywords

* Artificial intelligence  * Federated learning  * Overfitting