
Communication-Efficient and Privacy-Preserving Feature-based Federated Transfer Learning

by Feng Wang, M. Cenk Gursoy, Senem Velipasalar

First submitted to arXiv on: 12 Sep 2022

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
In this paper, the researchers propose feature-based federated transfer learning to improve communication efficiency in federated learning systems. Federated learning is a variant of machine learning that preserves clients’ privacy by sharing only model updates rather than raw data. Motivated by the limited radio spectrum available for these uplink transmissions, the authors propose having clients upload extracted features and outputs instead of parameter updates, which reduces the uplink payload by more than five orders of magnitude compared to existing update-based approaches. The paper also analyzes the random shuffling scheme used to preserve clients’ privacy and evaluates the proposed learning scheme on an image classification task.
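
To make the idea concrete, here is a minimal sketch (not the authors’ implementation) of the client-side step: a frozen, shared backbone turns raw samples into compact features, the sample order is randomly shuffled before upload, and only the shuffled features and outputs leave the device. The names (extract_features, client_upload, W_backbone) and the random linear backbone are illustrative assumptions, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    INPUT_DIM = 3 * 32 * 32   # e.g., a flattened 32x32 RGB image
    FEATURE_DIM = 64          # compact feature size, chosen for illustration

    # Stand-in for a pre-trained backbone shared by server and clients;
    # in practice this would be a frozen deep network (assumption).
    W_backbone = rng.standard_normal((INPUT_DIM, FEATURE_DIM)) * 0.01

    def extract_features(images):
        # Frozen feature extractor: only its outputs ever leave the client.
        return np.maximum(images @ W_backbone, 0.0)  # simple ReLU features

    def client_upload(images, labels):
        # Randomly shuffle sample order so the server cannot align the
        # uploaded features with any particular raw record.
        feats = extract_features(images)
        perm = rng.permutation(len(feats))
        return feats[perm], labels[perm]  # uplink payload: features + outputs

    # Toy usage on one client's local batch
    images = rng.standard_normal((8, INPUT_DIM))
    labels = rng.integers(0, 10, size=8)
    feats, labs = client_upload(images, labels)
    print(feats.shape)  # (8, 64) values, versus millions of model parameters

Under these assumptions, the uplink carries 8 x 64 feature values per batch instead of a full set of parameter updates, which is where the large payload reduction comes from.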
Low Difficulty Summary (written by GrooveSquid.com, original content)
Federated learning is a way for computers to work together without sharing personal information. It’s like a big team where each member shares only the small changes they’ve made, not their whole report. This helps keep people’s data safe. The problem is that sending even these small changes takes a lot of space and time, especially on big tasks. To fix this, researchers came up with an idea called feature-based federated transfer learning. Instead of sending tiny updates, clients send only the most important information, such as features or summaries, which shrinks the data sent enormously! This makes the process faster and more efficient.

Keywords

* Artificial intelligence  * Federated learning  * Image classification  * Machine learning  * Transfer learning