Age Aware Scheduling for Differentially-Private Federated Learning

by Kuan-Yu Lin, Hsuan-Yin Lin, Yu-Pin Hsu, Yu-Chih Huang

First submitted to arXiv on: 9 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR); Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original content by GrooveSquid.com)
This paper investigates how to combine differentially-private training and federated learning across databases that change over time, while balancing three competing factors: age, accuracy, and differential privacy (DP). The authors propose an optimization problem to meet DP requirements while minimizing the difference in model performance with or without DP constraints. To leverage scheduling advantages, they introduce an age-dependent upper bound on loss, leading to an age-aware scheduling design. Simulation results show that their proposed scheme outperforms traditional FL with classic DP, which ignores scheduling considerations. This research provides insights into the interplay of age, accuracy, and DP in federated learning, with practical implications for scheduling strategies.
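To make the scheduling idea concrete, here is a minimal, hypothetical sketch of an age-aware scheduler: each round, the server picks the databases whose information is stalest, as a greedy proxy for shrinking an age-dependent upper bound on the loss. The function name, the dictionary layout, and the "pick the largest age" rule are all illustrative assumptions, not the authors' actual optimization.

```python
def age_aware_schedule(ages, k):
    """Greedily pick the k stalest databases.

    ages: dict mapping database id -> rounds since it last participated
          (the "age" of its information at the server).
    Returns the k ids with the largest age. This is only a toy proxy for
    an age-dependent loss bound, not the paper's actual scheduling rule.
    """
    ranked = sorted(ages, key=lambda d: ages[d], reverse=True)
    return ranked[:k]


# Toy federated loop over 5 time-varying databases: scheduled databases
# reset their age to 0; the rest grow one round staler.
ages = {d: 0 for d in range(5)}
for _ in range(3):
    chosen = age_aware_schedule(ages, k=2)
    ages = {d: 0 if d in chosen else ages[d] + 1 for d in ages}
```

In the paper's setting, the selection rule would additionally account for the DP noise added to each update, which this sketch omits.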
Low Difficulty Summary (original content by GrooveSquid.com)
This paper looks at how to train a model using data from different sources while keeping that data private. It’s like sharing a secret recipe among friends who trust each other. The authors want to find the best way to balance three important things: making sure the data is kept private, getting accurate results, and considering how long ago the data was collected. They propose a new way of thinking about this problem by introducing an upper limit on how much difference there can be between the model they train and one that doesn’t have these privacy concerns. This approach helps them develop a better scheduling strategy to make sure everything runs smoothly. The results show that their method is more effective than traditional approaches, which don’t consider these factors.

Keywords

» Artificial intelligence  » Federated learning  » Optimization