Summary of SeCoKD: Aligning Large Language Models for In-Context Learning with Fewer Shots, by Weixing Wang et al.
SeCoKD: Aligning Large Language Models for In-Context Learning with Fewer Shots
by Weixing Wang, Haojin Yang, Christoph Meinel
First submitted to arXiv on: 20 Jun 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The proposed framework, SeCoKD, reduces the number of demonstrations Large Language Models (LLMs) need for in-context learning. It uses self-Knowledge Distillation to align the student model with a heavily prompted variant of itself, increasing the utility of a single demonstration (see the illustrative sketch after the table). The study evaluates SeCoKD on three LLMs and six benchmarks, focusing primarily on reasoning tasks. Results show that SeCoKD outperforms the base models and Supervised Fine-tuning in zero-shot and one-shot settings by 30% and 10%, respectively, with few negative artifacts when evaluated on new tasks. |
Low | GrooveSquid.com (original content) | SeCoKD is a new way to help Large Language Models learn a task from just one or two examples. This matters because these models usually need many examples to understand what they should do. The team tested SeCoKD on different language models and tasks and found that it works well in zero-shot and one-shot settings, meaning the model can still make good predictions with far fewer examples. |
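The abstract does not spell out the training objective, so the sketch below only illustrates one common way such a self-knowledge-distillation alignment can be set up: the same base model acts as a frozen teacher that sees a many-shot prompt, while a trainable student copy sees the bare query, and the student is trained to match the teacher's next-token distribution with a KL-divergence loss. The model name ("gpt2"), the demonstration strings, and the loss choice are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the self-knowledge-distillation idea described above.
# Assumptions (not from the paper's abstract): teacher and student are the same
# base model; the teacher sees a many-shot prompt while the student sees only
# the query; the student matches the teacher's next-token distribution via KL.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper works with larger LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
teacher = AutoModelForCausalLM.from_pretrained(model_name).eval()   # frozen, heavily prompted
student = AutoModelForCausalLM.from_pretrained(model_name).train()  # trainable copy

optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

demonstrations = "Q: 2+2? A: 4\nQ: 3+5? A: 8\n"  # many-shot context (illustrative)
query = "Q: 7+6? A:"

teacher_inputs = tokenizer(demonstrations + query, return_tensors="pt")  # heavily prompted
student_inputs = tokenizer(query, return_tensors="pt")                   # zero-shot

with torch.no_grad():
    teacher_logits = teacher(**teacher_inputs).logits[:, -1, :]  # teacher's next-token logits

student_logits = student(**student_inputs).logits[:, -1, :]

# Align the weakly prompted student with the heavily prompted teacher.
loss = F.kl_div(
    F.log_softmax(student_logits, dim=-1),
    F.softmax(teacher_logits, dim=-1),
    reduction="batchmean",
)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```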
Keywords
» Artificial intelligence » Fine tuning » Knowledge distillation » One shot » Student model » Supervised » Zero shot