Summary of Distributed In-Context Learning under Non-IID Among Clients, by Siqi Liang et al.
Distributed In-Context Learning under Non-IID Among Clients
by Siqi Liang, Sumyeong Ahn, Jiayu Zhou
First submitted to arXiv on: 31 Jul 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper investigates in-context learning (ICL) in distributed settings where clients hold data that is not independent and identically distributed (non-IID). Existing ICL methods are typically centralized, retrieving in-context examples (ICEs) from a single training dataset; in real-world scenarios, however, data may be spread across multiple clients, and remote data retrieval can be costly. The authors show that, because of non-IIDness, different test queries prefer different clients, so requiring equal contributions from all clients yields suboptimal performance. To address this, the paper proposes a framework that allocates each client a per-query ICE budget based on that query's preference for the client. Evaluated on diverse datasets, the framework outperforms competing baselines. |
Low | GrooveSquid.com (original content) | This paper looks at how we can use large language models (LLMs) in situations where data is not all in one place. Right now, most methods need one big, central dataset of examples to work well. But what if the data is spread out among many devices or computers? That’s exactly the situation this paper explores. The authors show that when there are many different sources of data, it’s hard to use ICL because each source has its own kind of data (non-IID). They then propose a new way to tackle this problem: giving each source the right amount of “budget” to contribute to the overall result. This approach is tested on various datasets and performs better than other methods. |
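The per-query budget allocation described in the medium summary can be sketched roughly as follows. This is only an illustrative sketch, not the paper's actual algorithm: the function name `allocate_budget`, the proportional split, and the largest-remainder tie-breaking are all assumptions made here for clarity.

```python
def allocate_budget(preferences, total_budget):
    """Split a total in-context-example (ICE) budget across clients in
    proportion to a query's preference score for each client.

    preferences  -- one non-negative score per client (higher = preferred)
    total_budget -- total number of ICEs to retrieve for this query
    """
    total = sum(preferences)
    # Ideal (fractional) share for each client.
    raw = [p / total * total_budget for p in preferences]
    # Integer part of each share.
    budget = [int(r) for r in raw]
    # Hand out leftover slots to the clients with the largest remainders,
    # so the budgets always sum exactly to total_budget.
    leftover = total_budget - sum(budget)
    by_remainder = sorted(range(len(raw)),
                          key=lambda i: raw[i] - budget[i],
                          reverse=True)
    for i in by_remainder[:leftover]:
        budget[i] += 1
    return budget
```

Under this sketch, a query that strongly prefers one client would draw most of its ICEs from that client, e.g. `allocate_budget([0.6, 0.3, 0.1], 8)` returns `[5, 2, 1]`, whereas the equal-contribution baseline criticized in the paper would force roughly uniform budgets regardless of preference.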