


FedPIA – Permuting and Integrating Adapters leveraging Wasserstein Barycenters for Finetuning Foundation Models in Multi-Modal Federated Learning

by Pramit Saha, Divyanshu Mishra, Felix Wagner, Konstantinos Kamnitsas, J. Alison Noble

First submitted to arXiv on: 19 Dec 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract, written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content written by GrooveSquid.com)
The paper proposes a novel framework called FedPIA to improve upon naive combinations of federated learning (FL) and parameter-efficient fine-tuning (PEFT) strategies. The framework introduces permutation and integration of local adapters in the server and global adapters in clients, exploiting Wasserstein barycenters for improved blending of client-specific and client-agnostic knowledge. This layer-wise permutation helps to bridge the gap in the parameter space of local and global adapters before integration. The authors conduct over 2000 client-level experiments utilizing 48 medical image datasets across five different medical vision-language FL task settings, demonstrating that FedPIA consistently outperforms state-of-the-art PEFT-FL baselines.
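
To make the permute-then-integrate idea concrete, here is a minimal sketch, not the authors' implementation, assuming each adapter layer can be treated as a plain weight matrix. It aligns a client adapter's hidden units to the global adapter's units with a one-to-one optimal-transport assignment, then blends the aligned weights; with only two weight sets and uniform unit masses, the Wasserstein-barycenter step reduces to an interpolation after alignment. The helper name `permute_and_integrate` and the blending weight are hypothetical, and the exact barycenter computation in FedPIA may differ.

```python
# Minimal sketch (not the authors' code): align a client adapter layer to the
# global adapter layer by permuting its hidden units, then blend the aligned
# weights. With two weight sets and uniform unit masses, the barycenter step
# reduces to an interpolation after alignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def permute_and_integrate(w_local, w_global, blend=0.5):
    """Hypothetical helper: w_local, w_global are (hidden_units, in_dim) adapter weights."""
    # Squared-L2 cost of matching each local unit to each global unit.
    cost = ((w_local[:, None, :] - w_global[None, :, :]) ** 2).sum(axis=-1)
    # One-to-one optimal assignment (the optimal transport plan for uniform point masses).
    row_ind, col_ind = linear_sum_assignment(cost)
    perm = np.empty_like(row_ind)
    perm[col_ind] = row_ind           # local unit assigned to each global unit
    w_aligned = w_local[perm]         # permute local units into the global ordering
    # Barycenter-style blend of the aligned local and global adapter weights.
    return blend * w_aligned + (1.0 - blend) * w_global

# Toy usage: two random 8-unit adapter layers with input dimension 16.
rng = np.random.default_rng(0)
w_client = rng.normal(size=(8, 16))
w_server = rng.normal(size=(8, 16))
print(permute_and_integrate(w_client, w_server).shape)  # (8, 16)
```

As described in the summary above, FedPIA applies this kind of step layer-wise and in both directions, permuting and integrating local adapters at the server and the global adapter at the clients, so a full implementation would repeat the alignment per layer and per adapter pair.
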
Low Difficulty Summary (original content written by GrooveSquid.com)
The paper tries to solve a problem with large vision-language models. These models need big data and computing power to work well, but sometimes we can’t collect or send this data because of privacy rules. One idea is to use special methods like federated learning and parameter-efficient fine-tuning on devices where the data is stored, rather than sending it to a central server. However, these methods have limitations, especially when dealing with different types of data and tasks. The paper proposes a new way to combine these methods called FedPIA, which uses special math to blend together the knowledge from different devices. This helps the models learn better even with limited resources.

Keywords

» Artificial intelligence  » Federated learning  » Fine tuning  » Parameter efficient