
Summary of Improving LoRA in Privacy-preserving Federated Learning, by Youbang Sun et al.


Improving LoRA in Privacy-preserving Federated Learning

by Youbang Sun, Zitao Li, Yaliang Li, Bolin Ding

First submitted to arXiv on: 18 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR); Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
In this paper, researchers tackle the challenge of applying a popular parameter-efficient fine-tuning method, low-rank adaptation (LoRA), to pre-trained language models in a privacy-preserving federated learning (FL) setting. LoRA is known for its good performance and computational efficiency, but it becomes unstable in FL due to factors such as data heterogeneity, multi-step local updates, and the additive noise enforced for differential privacy. The proposed solution, Federated Freeze A LoRA (FFA-LoRA), alleviates these issues by fixing the randomly initialized non-zero matrices and fine-tuning only the zero-initialized matrices. FFA-LoRA demonstrates better performance and computational efficiency than vanilla LoRA across various FL tasks.
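To make the "freeze A, train B" idea concrete, here is a minimal sketch in PyTorch. It is not the authors' implementation: the class and parameter names (`FFALoRALinear`, `lora_A`, `lora_B`) and the scaling choice are illustrative assumptions, and it only assumes the usual LoRA parameterization W + (alpha/r)·BA with A randomly initialized and B initialized to zero.

```python
# Minimal sketch (not the authors' code) of a linear layer where, as in the
# FFA-LoRA summary above, the non-zero matrix A is frozen and only the
# zero-initialized matrix B is fine-tuned.
import torch
import torch.nn as nn

class FFALoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        # Pre-trained weight: frozen, as in standard LoRA.
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False

        # A: random non-zero initialization, kept fixed (the "Freeze A" part).
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01,
                                   requires_grad=False)
        # B: zero-initialized and the only trainable adapter matrix.
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # W x + (alpha/r) * B A x; gradients flow only into lora_B.
        return self.base(x) + self.scaling * (x @ self.lora_A.t() @ self.lora_B.t())

layer = FFALoRALinear(768, 768, rank=8)
y = layer(torch.randn(4, 768))
print([n for n, p in layer.named_parameters() if p.requires_grad])  # ['lora_B']
```

One way to read the design: if every client shares the same fixed A, averaging the clients' B matrices on the server averages the effective update B·A exactly, which is consistent with the stability benefits summarized above.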

Low Difficulty Summary (written by GrooveSquid.com; original content)
In this paper, scientists work on making a popular way of training language models more stable when it’s used in a special kind of learning called federated learning. Federated learning lets many devices learn together without sharing their data. The problem is that the method they’re using, called Low-rank adaptation (LoRA), gets unstable when it’s used in this setting. To fix this, the researchers created a new version of LoRA that only changes certain parts of the model, keeping other parts the same. This new version, Federated Freeze A LoRA (FFA-LoRA), works better than the original LoRA and uses less computer power.

Keywords

* Artificial intelligence  * Federated learning  * Fine-tuning  * LoRA  * Low-rank adaptation  * Parameter-efficient