
Summary of Federated LoRA with Sparse Communication, by Kevin Kuo et al.


Federated LoRA with Sparse Communication

by Kevin Kuo, Arian Raje, Kousik Rajesh, Virginia Smith

First submitted to arXiv on: 7 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original content by GrooveSquid.com)
This paper explores methods for improving the communication efficiency of low-rank adaptation (LoRA) in cross-device federated learning. Unlike previous studies that focused on LoRA’s robustness to heterogeneity and privacy, this work aims to reduce communication costs while maintaining performance. The authors find that centralized techniques for unstructured pruning of neural networks do not translate well to federated settings. Instead, they propose FLASC, a simple approach that applies sparsity to LoRA only during communication while allowing clients to locally fine-tune the entire dense module. Experimental results show that FLASC matches the performance of dense LoRA with up to 10x less communication across four common federated learning tasks. The method also offers benefits over existing approaches with respect to heterogeneity and privacy. The paper highlights the importance of considering system-specific constraints when developing communication-efficient fine-tuning methods. A minimal code sketch of the communication pattern appears below.
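To make the idea concrete, here is a minimal sketch of one communication round as described in the summary: the client fine-tunes the full (dense) LoRA module locally, but only a sparsified version of its LoRA update is sent back over the network. This is an illustrative assumption of the scheme, not the authors' implementation; the function names (sparsify_topk, local_finetune), the top-k magnitude selection rule, and the synthetic "training" step are all stand-ins introduced for this sketch.

```python
import numpy as np

def sparsify_topk(update, density):
    """Keep only the largest-magnitude entries of `update`; zero out the rest.

    (Top-k magnitude selection is one plausible sparsification rule,
    assumed here for illustration.)
    """
    flat = update.ravel()
    k = max(1, int(density * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the k largest-magnitude entries
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(update.shape)

def local_finetune(lora_A, lora_B, lr=1e-2):
    """Placeholder for dense local fine-tuning of the full LoRA module.

    In practice this would run SGD/Adam on the client's local data; here it
    returns a synthetic update so the sketch stays self-contained.
    """
    rng = np.random.default_rng(0)
    return (rng.normal(scale=lr, size=lora_A.shape),
            rng.normal(scale=lr, size=lora_B.shape))

# --- one communication round (single client shown for brevity) ---
d, r, density = 64, 8, 0.1        # model dim, LoRA rank, communication density
lora_A = np.zeros((d, r))         # server-side LoRA factors A and B
lora_B = np.zeros((r, d))

# 1. Server -> client: global LoRA parameters.
client_A, client_B = lora_A.copy(), lora_B.copy()

# 2. Client fine-tunes the *dense* LoRA module locally (no sparsity here).
delta_A, delta_B = local_finetune(client_A, client_B)

# 3. Client -> server: only a sparse version of the update is transmitted.
sparse_A = sparsify_topk(delta_A, density)
sparse_B = sparsify_topk(delta_B, density)

# 4. Server applies the (aggregated) sparse updates.
lora_A += sparse_A
lora_B += sparse_B
```

The point the summary emphasizes is that sparsity is applied only at communication time: the client's local optimization still updates every LoRA parameter, which is what lets the sparse-communication scheme track the performance of dense LoRA.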
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about making machine learning more efficient for devices that need to train a shared model together. Right now, these devices have to send a lot of data back and forth during training, which can be slow and use up too much energy. The researchers found that methods that work well when training happens in one central place don’t always work when it is spread across many devices. They developed a new method called FLASC that makes training more efficient by only sending the most important pieces of information between devices. This approach worked just as well as the original method but sent up to ten times less data. It also helped the model handle different types of data and kept private information safe.

Keywords

» Artificial intelligence  » Federated learning  » LoRA  » Low-rank adaptation  » Machine learning  » Pruning