

FLoCoRA: Federated learning compression with low-rank adaptation

by Lucas Grativol Ribeiro, Mathieu Leonardon, Guillaume Muller, Virginie Fresse, Matthieu Arzel

First submitted to arXiv on: 20 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Signal Processing (eess.SP)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (GrooveSquid.com original content)
This paper applies Low-Rank Adaptation (LoRA) methods to train small vision models from scratch in Federated Learning (FL). Specifically, the authors propose FLoCoRA, an aggregation-agnostic method that integrates LoRA within FL, achieving a 4.8-fold reduction in communication costs while keeping accuracy degradation below 1% on CIFAR-10 classification with a ResNet-8 model. The same approach can be extended with affine quantization, yielding an 18.6-fold decrease in communication cost with accuracy comparable to the standard method on a ResNet-18 model. This formulation serves as a strong baseline for message size reduction, outperforming conventional model compression works while also lowering training memory requirements thanks to the low-rank adaptation.
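
As a concrete illustration (not the authors' code), the sketch below shows the communication pattern this implies: each client trains only small low-rank factors A and B on top of a base weight W that is never transmitted, and sends just those factors, which any aggregation rule (plain FedAvg in this sketch) can combine. The layer sizes, rank, and the toy stand-in for local training are assumptions chosen only for illustration.

```python
# Minimal sketch of the low-rank-adapter communication pattern (a guess at the
# idea, not the authors' implementation): clients exchange A (r x in) and
# B (out x r) instead of the full (out x in) weight, so each round costs
# out*r + r*in values rather than out*in.
import numpy as np

rng = np.random.default_rng(0)

OUT, IN, RANK = 64, 256, 4            # illustrative sizes, not taken from the paper

def init_adapter():
    """LoRA-style factors: A is small random, B starts at zero so B @ A = 0."""
    A = rng.normal(scale=0.01, size=(RANK, IN))
    B = np.zeros((OUT, RANK))
    return A, B

def local_update(A, B):
    """Toy stand-in for a client's local training on its private data."""
    A = A + rng.normal(scale=0.001, size=A.shape)
    B = B + rng.normal(scale=0.001, size=B.shape)
    return A, B

def aggregate(adapters):
    """FedAvg over the low-rank factors only (one possible aggregation rule)."""
    A_avg = np.mean([A for A, _ in adapters], axis=0)
    B_avg = np.mean([B for _, B in adapters], axis=0)
    return A_avg, B_avg

W = rng.normal(size=(OUT, IN))        # base weight, kept locally, never sent
A, B = init_adapter()

for _ in range(3):                    # a few federated rounds
    client_adapters = [local_update(A, B) for _ in range(5)]
    A, B = aggregate(client_adapters)

W_effective = W + B @ A               # adapted weight used for inference
full, lowrank = OUT * IN, OUT * RANK + RANK * IN
print(f"values sent per layer per round: {lowrank} vs {full} "
      f"({full / lowrank:.1f}x smaller)")
```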
Low Difficulty Summary (GrooveSquid.com original content)
This paper shows how computers can learn together by sharing only small pieces of information, an approach called Federated Learning (FL). The main idea is to use a technique called Low-Rank Adaptation (LoRA) to make FL faster and more efficient. The authors tested their method on image classification and found that it cut the amount of data that needs to be shared by roughly 5 to 19 times while still giving very accurate results. This matters because it lets computers work together without exchanging too much information.
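
The larger, 18.6-fold saving quoted in the medium summary comes from combining the low-rank adapters with affine quantization of the transmitted factors. Below is a minimal sketch of affine (asymmetric) quantization to 8-bit integers as one plausible instantiation; the exact bit-width and granularity used in the paper may differ, and the tensor shape here is made up for illustration.

```python
# Minimal sketch of affine 8-bit quantization applied to a low-rank update
# before transmission; bit-width, granularity, and shapes are assumptions.
import numpy as np

def affine_quantize(x, num_bits=8):
    """Map floats to uint8 with a scale and zero point so x ~= scale * (q - zero_point)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / (qmax - qmin) or 1.0   # avoid zero scale for constant tensors
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def affine_dequantize(q, scale, zero_point):
    """Recover an approximation of the original floats on the receiving side."""
    return scale * (q.astype(np.float32) - zero_point)

# Toy "low-rank update" to transmit; shape chosen only for the example.
update = np.random.default_rng(0).normal(size=(64, 4)).astype(np.float32)
q, scale, zp = affine_quantize(update)
recovered = affine_dequantize(q, scale, zp)
print("bytes sent: float32 =", update.nbytes, "-> uint8 =", q.nbytes)
print("max abs reconstruction error:", float(np.abs(update - recovered).max()))
```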

Keywords

» Artificial intelligence  » Classification  » Federated learning  » LoRA  » Low-rank adaptation  » Model compression  » Quantization  » ResNet