Summary of Fast Training Dataset Attribution via In-Context Learning, by Milad Fotouhi et al.
Fast Training Dataset Attribution via In-Context Learning
by Milad Fotouhi, Mohammad Taha Bahadori, Oluwaseyi Feyisetan, Payman Arabshahi, David Heckerman
First submitted to arXiv on: 14 Aug 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper explores how to quickly estimate the influence of training data on a large language model's (LLM's) outputs by leveraging in-context learning and prompt engineering. It proposes two methods: one scores each candidate training example by the difference between the LLM's outputs with and without that example in context, and the other frames contribution scoring as a matrix factorization problem. The comparison shows that the factorization approach is more resilient to noise in in-context learning, yielding more reliable estimates of data contributions (a toy numeric sketch of both ideas follows this table). |
Low | GrooveSquid.com (original content) | The paper studies how to quickly figure out which pieces of training data most influence what a big language model says. It uses two new techniques: comparing the model's outputs with and without a piece of data shown in the prompt, and treating the scoring problem like a puzzle that can be broken into simpler parts. The results show that the second method handles noise better, giving a more accurate picture of what each piece of data contributes. |
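As a rough illustration of the two ideas described in the medium summary, the Python sketch below fabricates a matrix of with-versus-without-context output differences and compares a simple averaging score against a rank-1 SVD (matrix factorization) score. The setup, array shapes, and noise model are assumptions made for illustration only; they are not taken from the paper, and no real LLM is called.

```python
import numpy as np

# A minimal, synthetic sketch (not the paper's exact procedure).
# All names and numbers here are illustrative assumptions.
rng = np.random.default_rng(0)
n_queries, n_sources = 50, 20

# Pretend delta[i, j] is the change in an LLM's output score on query i when
# candidate training example j is placed in the prompt context vs. no context.
# Here we fabricate it as a noisy rank-1 signal instead of querying a model.
true_contrib = rng.exponential(scale=1.0, size=n_sources)
query_sensitivity = rng.uniform(0.5, 1.5, size=n_queries)
delta = np.outer(query_sensitivity, true_contrib) \
        + 0.3 * rng.normal(size=(n_queries, n_sources))

# Method 1 analogue: score each source by averaging the output differences.
naive_scores = delta.mean(axis=0)

# Method 2 analogue: treat scoring as matrix factorization. A rank-1 SVD of
# the score matrix denoises it; the leading right singular vector ranks sources.
U, s, Vt = np.linalg.svd(delta, full_matrices=False)
factorized_scores = s[0] * np.abs(Vt[0])

print("top sources (naive):     ", np.argsort(-naive_scores)[:5])
print("top sources (factorized):", np.argsort(-factorized_scores)[:5])
```

On this synthetic data, both rankings recover the high-contribution sources, but the factorized scores are less sensitive to the added noise, which mirrors the robustness claim in the summaries above.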
Keywords
* Artificial intelligence
* Prompt