Summary of Towards the Effect of Examples on In-Context Learning: A Theoretical Case Study, by Pengfei He et al.
Towards the Effect of Examples on In-Context Learning: A Theoretical Case Study
by Pengfei He, Yingqian Cui, Han Xu, Hui Liu, Makoto Yamada, Jiliang Tang, Yue Xing
First submitted to arXiv on: 12 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | In this paper, the researchers explore the mechanisms behind large language models (LLMs) adapting to downstream tasks through in-context learning (ICL). They introduce a probabilistic model to analyze how pre-training knowledge and the knowledge carried by in-context examples interact in binary classification tasks. The study reveals that when pre-training knowledge contradicts the knowledge in the examples, the ICL prediction relies more on one or the other depending on the number of examples. Label frequency and label noise also affect accuracy: minority classes are predicted less accurately, and the impact of label noise depends on its level. Simulations and real-data experiments verify the theoretical results (a toy sketch of this prior-versus-evidence interplay appears below the table). |
Low | GrooveSquid.com (original content) | In-context learning helps big language models learn new tasks quickly by using a few example sentences. Scientists don’t fully understand how this works, so they’re studying binary classification tasks to figure out what’s happening. They came up with a math model that shows how pre-training knowledge and example-based learning mix together. The results show that when the model is trying to learn something new, it might rely more on old knowledge or on the new examples, depending on how many examples there are. How often each label appears and how noisy the labels are also affect accuracy. |
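To make the prior-versus-evidence interplay described in the summaries concrete, here is a minimal Python sketch. It is not the paper’s actual model: the Beta-Bernoulli setup, the `prior_strength` pseudo-count, and all parameter values are illustrative assumptions. It only shows the qualitative behavior the summaries describe, namely that with few examples the prediction follows pre-training knowledge, while with many examples it shifts toward the example-based knowledge.

```python
def icl_posterior(prior_p, k, observed_frac, prior_strength=10.0):
    """Toy Beta-Bernoulli posterior mixing a 'pre-training' prior with
    evidence from k in-context examples (hypothetical setup, not the
    paper's actual model).

    prior_p        : prior probability of label 1 from pre-training
    k              : number of in-context examples
    observed_frac  : fraction of those examples labeled 1
    prior_strength : pseudo-count weight given to the prior
    """
    # Encode the prior as pseudo-counts, then add the example counts.
    alpha = prior_strength * prior_p + k * observed_frac
    beta = prior_strength * (1 - prior_p) + k * (1 - observed_frac)
    return alpha / (alpha + beta)  # posterior mean of P(label = 1)

# Pre-training says label 1 is unlikely (p = 0.2), but every in-context
# example is labeled 1: with few examples the prior dominates, and with
# many examples the evidence takes over.
for k in (0, 2, 8, 32, 128):
    print(k, round(icl_posterior(0.2, k, 1.0), 3))
```

Running the loop prints a probability that starts at the prior (0.2 with zero examples) and climbs toward 1 as consistent examples accumulate, mirroring the finding that ICL relies more on pre-training knowledge or on the examples depending on how many examples are given.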
Keywords
» Artificial intelligence » Classification » Probabilistic model