Token-based Decision Criteria Are Suboptimal in In-context Learning
by Hakaze Cho, Yoshihiro Sakai, Mariko Kato, Kenshiro Tanaka, Akira Ishii, Naoya Inoue
First submitted to arXiv on: 24 Jun 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The proposed Hidden Calibration method replaces the traditional token-based classification criteria used in In-Context Learning (ICL), improving performance by roughly 20%–50% across a range of models and datasets. By abandoning token probabilities and instead applying a nearest centroid classifier to the last hidden states, Hidden Calibration achieves state-of-the-art results in ICL. The approach yields better classification boundaries with less inter-class overlap, and reveals that language models (LMs) can produce linearly separable intra-class clusters when given demonstrations, supporting the working principle of ICL. |
| Low | GrooveSquid.com (original content) | In-Context Learning is a way to help computers learn new skills by using examples from existing data. Usually, this process uses certain rules to decide what category an example belongs to. However, these rules can be tricky and don't always work well. To solve this problem, researchers have created a new method called Hidden Calibration. Instead of using old rules, it looks at the last hidden states in a computer's language model and chooses the category that is closest to those states. This new approach has been tested on many different models and datasets and has shown significant improvements, achieving state-of-the-art results. |
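The core idea of the medium summary (classifying by the nearest class centroid in last-hidden-state space, instead of comparing label-token probabilities) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes last-layer hidden states have already been extracted as NumPy arrays, and the function names are hypothetical.

```python
import numpy as np

def fit_centroids(hidden_states, labels):
    """Compute one centroid per class from the last hidden states
    of a handful of labeled calibration examples.
    (Hypothetical helper; the paper's actual procedure may differ.)"""
    hidden_states = np.asarray(hidden_states, dtype=float)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    # Mean hidden state of each class serves as that class's centroid.
    centroids = np.stack(
        [hidden_states[labels == c].mean(axis=0) for c in classes]
    )
    return classes, centroids

def nearest_centroid_predict(query_state, classes, centroids):
    """Assign the class whose centroid is closest (Euclidean distance)
    to the query example's last hidden state."""
    dists = np.linalg.norm(centroids - np.asarray(query_state, dtype=float), axis=1)
    return classes[int(np.argmin(dists))]
```

The contrast with the token-based criterion is that no label-token probability is read from the LM head at all; the decision is made purely in the hidden-representation space.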
Keywords
* Artificial intelligence * Classification * Language model * Token