
Summary of Optimisation of Federated Learning Settings under Statistical Heterogeneity Variations, by Basem Suleiman et al.


Optimisation of federated learning settings under statistical heterogeneity variations

by Basem Suleiman, Muhammad Johan Alibasa, Rizka Widyarini Purwanto, Lewis Jeffries, Ali Anaissi, Jacky Song

First submitted to arXiv on: 10 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper investigates Federated Learning (FL), a collaborative learning approach in which devices periodically share model parameters rather than raw data. The authors highlight that FL is affected by statistical heterogeneity: local devices hold diverse data distributions, so the overall data is Independent and Identically Distributed (IID) to varying degrees. To address this challenge, they propose a systematic data partition strategy that simulates different IID levels, along with a metric for measuring how IID a partition is. An empirical analysis on three datasets, spanning various FL training parameters and aggregators, identifies the best FL model and key parameters for datasets with distinct characteristics. The authors distill the results into guidelines for optimising FL model performance under different IID levels and across diverse datasets.
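The summary does not spell out the partition strategy or the IID metric, so the sketch below is an illustrative assumption rather than the authors' method: it simulates different IID levels with a Dirichlet label partition (a common technique in the FL literature, where a smaller concentration parameter alpha yields a more non-IID split) and scores a partition with a hypothetical Jensen-Shannon-based IID measure.

```python
# Illustrative sketch only: the paper's actual partition strategy and
# IID metric are not described in this summary. The Dirichlet partition
# and the Jensen-Shannon score here are common stand-ins, not the
# authors' method.
import numpy as np
from scipy.spatial.distance import jensenshannon

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Split sample indices across clients; smaller alpha = more non-IID."""
    rng = np.random.default_rng(seed)
    n_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(n_clients)]
    for c in range(n_classes):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Fraction of class c that each client receives.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

def iid_score(labels, client_indices):
    """Hypothetical IID measure: 1 minus the mean Jensen-Shannon distance
    between each client's label distribution and the global one."""
    n_classes = int(labels.max()) + 1
    global_dist = np.bincount(labels, minlength=n_classes) / len(labels)
    distances = []
    for idx in client_indices:
        if not idx:  # skip clients that received no samples
            continue
        local = np.bincount(labels[idx], minlength=n_classes) / len(idx)
        distances.append(jensenshannon(local, global_dist, base=2))
    return 1.0 - float(np.mean(distances))

# Example: 10,000 samples over 10 classes split across 20 clients.
labels = np.random.default_rng(1).integers(0, 10, size=10_000)
for alpha in (100.0, 1.0, 0.1):  # near-IID -> highly non-IID
    parts = dirichlet_partition(labels, n_clients=20, alpha=alpha)
    print(f"alpha={alpha:>5}: IID score = {iid_score(labels, parts):.3f}")
```

As alpha grows the split approaches IID and the score approaches 1; as alpha shrinks, clients' label distributions drift apart and the score falls, mimicking the range of heterogeneity levels the paper evaluates.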
Low Difficulty Summary (original content by GrooveSquid.com)
This paper helps us understand how devices can work together to learn new things without sharing their data. Each device might see the same problem in a different way, which can make it harder to get accurate results. To tackle this, the researchers developed ways to simulate different levels of data diversity and created a tool to measure how similar or different the data is from device to device. They tested these ideas on three sets of data and found the best combination of methods for each one. This will help devices learn better together in the future.

Keywords

  • Artificial intelligence
  • Federated learning