


Personalized Collaborative Fine-Tuning for On-Device Large Language Models

by Nicolas Wagner, Dongyang Fan, Martin Jaggi

First submitted to arXiv on: 15 Apr 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (GrooveSquid.com, original content)
The authors propose a fine-tuning protocol for on-device learning of large language models that leverages self-supervised collaboration among devices with limited local data. Three trust-weighted gradient aggregation schemes are introduced: weight similarity-based, prediction similarity-based, and validation performance-based. To reduce communication overhead, Low-Rank Adaptation (LoRA) is integrated, and only the LoRA weight updates are exchanged. The protocols outperform both FedAvg and purely local fine-tuning, especially in scenarios with diverse local data distributions.
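To make the idea of trust-weighted aggregation concrete, here is a minimal sketch of the weight-similarity scheme: each device scores its peers by how similar their update vectors are to its own, normalizes those scores into trust weights, and averages the peers' LoRA updates accordingly. This is an illustrative reconstruction, not the authors' implementation; the function names, the cosine-similarity choice, and the softmax normalization are assumptions.

```python
import math

def cosine(a, b):
    # Cosine similarity between two flattened update vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-12)

def trust_weights(own_update, peer_updates):
    # Weight-similarity scheme (sketch): peers whose LoRA updates point in a
    # direction similar to our own local update receive higher trust.
    sims = [cosine(own_update, u) for u in peer_updates]
    m = max(sims)
    exps = [math.exp(s - m) for s in sims]  # softmax for a normalized trust distribution
    z = sum(exps)
    return [e / z for e in exps]

def aggregate(peer_updates, trust):
    # Trust-weighted average of the peers' update vectors.
    dim = len(peer_updates[0])
    return [sum(t * u[i] for t, u in zip(trust, peer_updates)) for i in range(dim)]
```

The prediction-similarity and validation-performance schemes would plug into the same structure, replacing `cosine` over weights with agreement between model outputs or with held-out validation scores.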
Low Difficulty Summary (GrooveSquid.com, original content)
We’re exploring a new way to improve language models on devices like phones or tablets when each device only has a small amount of data. We want to use those small amounts of local data to make the model better. To do this, we propose three different ways to combine the updates from each device: based on how similar their weights are, how similar their predictions are, and how well they perform on validation data. Since sharing a lot of information can be slow and expensive, we also use something called Low-Rank Adaptation (LoRA) so that only a small, compressed set of updates is sent. Our approach works better than standard federated averaging and purely local training, especially when each device has different data.
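The communication saving from exchanging only LoRA updates comes from LoRA's low-rank factorization: a full weight update of shape d_out × d_in is replaced by two small factors of rank r. The sketch below counts parameters for one such matrix; the layer size (4096 × 4096) and rank (8) are illustrative assumptions, not figures from the paper.

```python
def lora_param_count(d_in, d_out, rank):
    # LoRA replaces a full d_out x d_in weight update with two low-rank
    # factors: B (d_out x rank) and A (rank x d_in).
    return d_out * rank + rank * d_in

full = 4096 * 4096                    # full-rank update for one weight matrix
lora = lora_param_count(4096, 4096, 8)  # LoRA update at rank 8
savings = full // lora                  # how many times smaller the exchange is
```

At rank 8 this is a 256x reduction in parameters exchanged per matrix, which is why the protocol shares LoRA updates instead of full weights.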

Keywords

» Artificial intelligence  » Fine-tuning  » Large language model  » LoRA  » Low-rank adaptation  » Self-supervised