Summary of Federated LLMs Fine-Tuned with Adaptive Importance-Aware LoRA, by Yang Su et al.


Federated LLMs Fine-tuned with Adaptive Importance-Aware LoRA

by Yang Su, Na Yan, Yansha Deng

First submitted to arXiv on: 10 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (GrooveSquid.com original content)
This paper proposes Heterogeneous Adaptive Federated Low-Rank Adaptation (LoRA), a novel framework for federated fine-tuning of pre-trained Large Language Models (LLMs). The framework addresses the challenges posed by large model sizes and client resource heterogeneity. To accommodate these differences, the authors introduce an importance-based parameter truncation scheme and a parameter freezing scheme. They also propose an adaptive aggregation approach to mitigate the information dilution caused by zero-padding aggregation. Experimental results on the 20 Newsgroups classification task demonstrate that the method converges quickly with low communication overhead and avoids performance degradation.
Low Difficulty Summary (GrooveSquid.com original content)
Federated learning is a way for different devices or computers to train artificial intelligence models together. It matters because it lets them build shared AI models without pooling all their raw data, which helps keep information private. However, making these models work across devices with varying capabilities is tricky. This paper proposes a new method, Heterogeneous Adaptive Federated Low-Rank Adaptation (LoRA), that makes this process more efficient and effective. The authors tested their method on a text classification task and found it worked well.
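To make the "zero-padding aggregation" problem mentioned in the medium-difficulty summary concrete, here is a minimal numpy sketch. It is not the paper's implementation: the function names, the uniform example weights, and the plain weighted average are all illustrative assumptions. Each client holds a LoRA factor pair B (d x r_i) and A (r_i x k) with its own rank r_i; naive aggregation zero-pads every pair to the maximum rank before averaging, which is the step the paper's adaptive aggregation aims to improve.

```python
import numpy as np

def zero_pad(B, A, r_max):
    """Pad B (d x r) and A (r x k) with zero columns/rows up to rank r_max."""
    d, r = B.shape
    _, k = A.shape
    B_pad = np.zeros((d, r_max))
    B_pad[:, :r] = B
    A_pad = np.zeros((r_max, k))
    A_pad[:r, :] = A
    return B_pad, A_pad

def aggregate(clients, weights):
    """Weighted average of zero-padded LoRA factors.

    clients: list of (B_i, A_i) pairs with heterogeneous ranks r_i.
    weights: aggregation weights; uniform weights reproduce plain
    zero-padding aggregation, where low-rank clients' padded zeros
    dilute the averaged factors. (The paper's adaptive scheme chooses
    the aggregation differently to counter this; not reproduced here.)
    """
    r_max = max(B.shape[1] for B, _ in clients)
    padded = [zero_pad(B, A, r_max) for B, A in clients]
    B_agg = sum(w * B for w, (B, _) in zip(weights, padded))
    A_agg = sum(w * A for w, (_, A) in zip(weights, padded))
    return B_agg, A_agg
```

For example, averaging a rank-2 client with a rank-4 client yields rank-4 factors in which the rank-2 client contributes zeros to the last two rank components, illustrating the dilution effect.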

Keywords

» Artificial intelligence  » Classification  » Federated learning  » Fine-tuning  » LoRA  » Low-rank adaptation