
Summary of Joint Energy and Latency Optimization in Federated Learning over Cell-Free Massive MIMO Networks, by Afsaneh Mahmoudi et al.


Joint Energy and Latency Optimization in Federated Learning over Cell-Free Massive MIMO Networks

by Afsaneh Mahmoudi, Mahmoud Zaher, Emil Björnson

First submitted to arXiv on: 28 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Information Theory (cs.IT); Networking and Internet Architecture (cs.NI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Federated learning (FL) enables users to share model updates rather than raw data, preserving privacy and reducing communication overhead. However, as the number of participating users grows, large-scale FL over wireless networks suffers from increased latency. To address this, the cell-free massive multiple-input multiple-output (CFmMIMO) architecture is employed, which improves energy efficiency through spatial multiplexing and collaborative beamforming. The proposed uplink power allocation scheme for FL over CFmMIMO accounts for the impact of each user’s transmit power on the energy consumption and latency of the other users, in order to jointly minimize the users’ uplink energy and the training latency. The algorithm is based on the coordinate gradient descent method. Numerical results show that the proposed method outperforms existing solutions, achieving up to 27% higher test accuracy under a limited uplink energy and latency budget.
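
To make the optimization idea concrete, here is a minimal sketch of coordinate-wise (per-user) gradient descent on a weighted sum of uplink energy and training latency. Everything in it is an illustrative assumption rather than the paper's method: the interference-limited SINR model, the objective weights, and all parameter values (bandwidth, noise power, model size, power budget, channel gains) are made up for the example, and finite-difference gradients stand in for the closed-form derivatives and CFmMIMO expressions an actual implementation would use.

```python
import numpy as np

# Sketch of per-user (coordinate-wise) gradient descent for uplink power
# allocation in federated learning. The channel model and all constants
# below are illustrative assumptions, not the paper's CFmMIMO formulation.

rng = np.random.default_rng(0)

K = 8                                   # number of FL users (assumed)
B = 1e6                                 # uplink bandwidth [Hz] (assumed)
N0 = 1e-13                              # noise power spectral density [W/Hz] (assumed)
S = 1e6                                 # model update size per user [bits] (assumed)
p_max = 0.2                             # per-user transmit power budget [W] (assumed)
g = rng.uniform(1e-9, 1e-8, size=K)     # effective channel gains (assumed)
w = 0.5                                 # weight trading off energy vs. latency (assumed)

def rates(p):
    """Per-user uplink rates under a simple interference-limited SINR model."""
    signal = g * p
    interference = np.sum(signal) - signal          # power received from other users
    sinr = signal / (N0 * B + interference)
    return B * np.log2(1.0 + sinr)                  # bits/s

def objective(p):
    """Weighted sum of total uplink energy and per-round latency."""
    t = S / np.maximum(rates(p), 1e-12)             # per-user upload time [s]
    energy = np.sum(p * t)                          # total transmit energy [J]
    latency = np.max(t)                             # round ends when the slowest user finishes
    return w * energy + (1.0 - w) * latency

def partial_derivative(f, p, k, eps=1e-6):
    """Central finite-difference estimate of d f / d p_k."""
    p_plus, p_minus = p.copy(), p.copy()
    p_plus[k] += eps
    p_minus[k] -= eps
    return (f(p_plus) - f(p_minus)) / (2.0 * eps)

# Coordinate gradient descent: update one user's power at a time with a
# projected gradient step, sweeping over users until the powers stop moving.
p = np.full(K, p_max)                               # start from full power
step = 1e-4
for sweep in range(300):
    p_old = p.copy()
    for k in range(K):
        grad_k = partial_derivative(objective, p, k)
        p[k] = np.clip(p[k] - step * grad_k, 1e-3, p_max)   # keep power in [p_min, p_max]
    if np.max(np.abs(p - p_old)) < 1e-6:
        break

print("optimized powers [W]:", np.round(p, 4))
print("objective value:", objective(p))
```

The key point the sketch tries to capture is the coupling stated in the summary: because every user's power appears in the other users' interference terms, each coordinate update re-evaluates the whole objective, so one user's power choice is adjusted in view of its effect on everyone else's energy and latency.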
Low Difficulty Summary (written by GrooveSquid.com, original content)
Federated learning lets people share models instead of data, keeping information private and reducing internet traffic. But when many people join, training slows down because of the extra communication delay. Cell-free massive MIMO helps by making the wireless link more efficient. The challenge is deciding how much power each person should use when sending their model. The authors developed a new way to do this that takes into account how each person’s power choice affects everyone else. This method worked better than other approaches, giving up to a 27% boost in accuracy while staying within a limited energy and delay budget.

Keywords

» Artificial intelligence  » Federated learning  » Gradient descent