DiPrompT: Disentangled Prompt Tuning for Multiple Latent Domain Generalization in Federated Learning

by Sikai Bai, Jie Zhang, Shuaicheng Li, Song Guo, Jingcai Guo, Jun Hou, Tao Han, Xiaocheng Lu

First submitted to arXiv on: 11 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
Federated learning (FL) has emerged as a powerful paradigm for learning from decentralized data. However, most existing FL methods assume that domain labels are provided during training, which can be impractical in real-world scenarios because it underutilizes edge devices and requires additional cross-client domain annotation. To overcome this limitation, we propose an efficient approach called Disentangled Prompt Tuning (DiPrompT), which performs federated domain generalization without explicit constraints on the number of domains or clients. DiPrompT learns adaptive prompts that remove the one-to-one mapping restriction between source domains and local clients, and it introduces a dynamic query metric that automatically assigns a suitable domain label to each sample without labor-intensive annotation (see the sketch after the summaries below). When domain labels are not provided, DiPrompT achieves superior domain generalization performance compared to state-of-the-art FL methods.

Low Difficulty Summary (original content by GrooveSquid.com)
Federated learning is a way to learn from data collected on many devices or computers. Most current approaches require specific information about the data on each device, which may not be practical to collect. Our new approach, Disentangled Prompt Tuning (DiPrompT), helps devices work together without needing this extra information. It uses prompts that help devices adapt to different situations, and it automatically figures out which situation each piece of data comes from, without requiring human annotation. When that specific information is unavailable, this approach learns from the data better than existing methods.
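
To make the mechanism in the medium difficulty summary concrete, here is a minimal PyTorch-style sketch of the two ideas it describes: a shared global prompt plus a pool of domain prompts (so domains need not map one-to-one onto clients), and a dynamic query metric that assigns each sample a latent domain label. The class name DiPromptSketch, the cosine-similarity key matching, and all dimensions are illustrative assumptions made for this summary, not the authors' published implementation.

# Illustrative sketch of the DiPrompT idea (assumptions, not the authors'
# code): a global prompt captures domain-invariant knowledge, a small pool
# of domain prompts captures domain-specific knowledge, and a learnable key
# per domain prompt acts as the "dynamic query metric" that picks a latent
# domain label for each sample -- no manual domain annotation needed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiPromptSketch(nn.Module):
    def __init__(self, feat_dim=512, prompt_len=4, num_domains=3):
        super().__init__()
        # Domain-invariant (global) prompt shared by all clients.
        self.global_prompt = nn.Parameter(torch.randn(prompt_len, feat_dim) * 0.02)
        # Pool of domain prompts; num_domains need not equal the number of
        # clients (no one-to-one client/domain mapping).
        self.domain_prompts = nn.Parameter(torch.randn(num_domains, prompt_len, feat_dim) * 0.02)
        # One learnable key per domain prompt, matched against sample features.
        self.domain_keys = nn.Parameter(torch.randn(num_domains, feat_dim) * 0.02)

    def query_domain(self, feats):
        # Dynamic query metric: cosine similarity between a sample's frozen
        # backbone feature and each domain key yields a pseudo domain label.
        sims = F.cosine_similarity(
            feats.unsqueeze(1), self.domain_keys.unsqueeze(0), dim=-1
        )  # shape: (batch, num_domains)
        return sims.argmax(dim=1), sims

    def forward(self, feats):
        labels, sims = self.query_domain(feats)
        # Assemble per-sample prompts: global part + selected domain part.
        picked = self.domain_prompts[labels]              # (batch, prompt_len, feat_dim)
        global_part = self.global_prompt.expand(feats.size(0), -1, -1)
        prompts = torch.cat([global_part, picked], dim=1)
        return prompts, labels, sims

# Usage with features from any frozen backbone (e.g. a CLIP image encoder):
feats = torch.randn(8, 512)               # stand-in for backbone features
model = DiPromptSketch()
prompts, pseudo_domains, _ = model(feats)
print(prompts.shape, pseudo_domains)      # torch.Size([8, 8, 512]) + labels

In a federated setting, one would presumably aggregate the global prompt across all clients while the domain prompts and keys carry the domain-specific knowledge; the argmax over the query metric is what replaces manual cross-client domain annotation.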

Keywords

* Artificial intelligence
* Domain generalization
* Federated learning
* Prompt