
Bayesian Federated Model Compression for Communication and Computation Efficiency

by Chengyu Xia, Danny H. K. Tsang, Vincent K. N. Lau

First submitted to arXiv on: 11 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper studies Bayesian model compression in federated learning (FL) to obtain sparse models that are efficient in both communication and computation. The authors propose a decentralized Turbo variational Bayesian inference (D-Turbo-VBI) FL framework, built on a hierarchical sparse prior that promotes clustered sparsity in the weight matrix. By integrating message passing and VBI within a decentralized turbo framework, they derive the D-Turbo-VBI algorithm, which reduces both the upstream and downstream communication overhead during federated training and the computational complexity during local inference. The paper also establishes the convergence property of the proposed algorithm. Simulation results show significant gains over baselines in reducing both communication overhead and computational complexity.
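The paper's exact prior and message-passing updates are not reproduced here. As a rough intuition for how a hierarchical sparse prior combined with variational Bayesian inference can induce clustered (group-level) sparsity, below is a minimal single-node sketch using a classical group automatic-relevance-determination (ARD) prior on a toy linear model. Everything in it (the data, group sizes, and hyperparameters) is an illustrative assumption; it is not the D-Turbo-VBI algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = X @ w_true + noise, where only the first group of
# weights is active -- mimicking clustered (group) sparsity.
n, d, group_size = 200, 20, 5
groups = np.arange(d) // group_size          # 4 groups of 5 weights
w_true = np.zeros(d)
w_true[:group_size] = rng.normal(size=group_size)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

beta = 1.0 / 0.1**2                          # known noise precision
a0, b0 = 1e-6, 1e-6                          # vague Gamma hyperprior on alpha
E_alpha = np.ones(d)                         # E[alpha_g], one value per weight

# Mean-field VBI: alternate closed-form updates for q(w) and q(alpha).
for _ in range(50):
    # q(w) = N(mu, Sigma), with group-wise Gaussian prior N(0, diag(1/alpha))
    Sigma = np.linalg.inv(beta * X.T @ X + np.diag(E_alpha))
    mu = beta * Sigma @ X.T @ y
    # q(alpha_g) = Gamma(a_g, b_g); its posterior mean feeds back into q(w)
    for g in range(d // group_size):
        idx = groups == g
        a_g = a0 + idx.sum() / 2.0
        b_g = b0 + 0.5 * np.sum(mu[idx] ** 2 + np.diag(Sigma)[idx])
        E_alpha[idx] = a_g / b_g

# Groups whose precision E[alpha] blows up are pruned as a whole block,
# which is exactly the clustered-sparsity effect the prior is meant to induce.
kept = E_alpha < 1e4
print("posterior mean (pruned):", np.where(kept, mu, 0.0).round(3))
```

In the actual framework, updates of this kind would be decentralized across clients and coupled through turbo-style message passing between a sparsity-prior module and the local inference modules; this toy runs on a single node purely to illustrate the sparsity mechanism.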
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about a new way to make models learn together on different devices without sending too much information between them. The authors use a kind of math called Bayesian model compression to make the models smaller and faster. They create a new method that works like a “turbo” to help the devices talk less and compute more quickly, which helps train big models in a way that is more efficient while staying accurate.

Keywords

» Artificial intelligence  » Bayesian inference  » Federated learning  » Inference  » Model compression