SCALA: Split Federated Learning with Concatenated Activations and Logit Adjustments

by Jiarong Yang, Yuan Liu

First submitted to arXiv on: 8 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High difficulty (paper authors)
Read the original abstract here.

Medium difficulty (GrooveSquid.com, original content)
This paper studies Split Federated Learning (SFL), a distributed machine learning framework in which a model is split and trained collaboratively between a server and clients. However, data heterogeneity and partial client participation lead to label distribution skew, which degrades performance. To address this, the authors propose SFL with Concatenated Activations and Logit Adjustments (SCALA), which concatenates the activations from the client-side models as the input to the server-side model and adjusts the logits to account for variations in label distribution. Theoretical analysis and experiments on public datasets demonstrate the effectiveness of SCALA.

Low difficulty (GrooveSquid.com, original content)
SFL is a way to train machine learning models together across many devices or computers. But the data and the devices that participate can differ, making it hard to get good results. To fix this, researchers developed SFL with Concatenated Activations and Logit Adjustments (SCALA). SCALA combines the information from each device and adjusts for differences in their data, which makes training better. The paper shows that SCALA works well on public datasets.

Keywords

» Artificial intelligence  » Federated learning  » Machine learning