Summary of SMART: Submodular Data Mixture Strategy for Instruction Tuning, by H S V N S Kowndinya Renduchintala et al.
SMART: Submodular Data Mixture Strategy for Instruction Tuning
by H S V N S Kowndinya Renduchintala, Sumit Bhatia, Ganesh Ramakrishnan
First submitted to arXiv on: 13 Mar 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | The paper introduces SMART, a novel data mixture strategy for instruction tuning in language models. SMART uses a submodular function to assign importance scores to tasks, distributes the fine-tuning budget across tasks according to those scores, and then selects non-redundant samples from each task. The authors demonstrate that SMART outperforms traditional methods such as examples-proportional mixing and equal mixing, and that it facilitates building data mixtures from representative subsets of tasks. A task-pruning analysis reveals that, in limited-budget settings, allocating the budget among a subset of representative tasks yields superior performance. (A minimal sketch of this two-stage idea appears below the table.) |
| Low | GrooveSquid.com (original content) | The paper is about improving how we fine-tune language models so they do better on new tasks. Right now, it is hard to find the right balance between different tasks. The authors create a new method called SMART that figures out which tasks are most important and assigns more of the training budget to them. They show that this method works better than other methods and helps make good choices when training data is limited. |
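The medium-difficulty summary describes a two-stage procedure: score tasks with a submodular function, split the fine-tuning budget in proportion to those scores, and then pick non-redundant samples within each task. The summary does not name the exact submodular function SMART uses, so the sketch below assumes the facility-location function (a common submodular choice) over hypothetical embedding inputs; the names `task_embs`, `sample_embs_per_task`, and `budget` are illustrative, not the paper’s API. This is a minimal sketch of the idea, not the authors’ implementation.

```python
import numpy as np

def greedy_facility_location(sim, k):
    """Greedily maximize the submodular objective f(S) = sum_i max_{j in S} sim[i, j].
    Returns the selected column indices and the marginal gain of each pick."""
    n = sim.shape[1]
    coverage = np.zeros(sim.shape[0])  # best similarity achieved so far, per row
    selected, gains = [], []
    remaining = set(range(n))
    for _ in range(min(k, n)):
        best, best_gain = None, -np.inf
        for c in remaining:
            gain = np.maximum(coverage, sim[:, c]).sum() - coverage.sum()
            if gain > best_gain:
                best, best_gain = c, gain
        selected.append(best)
        gains.append(best_gain)
        coverage = np.maximum(coverage, sim[:, best])
        remaining.remove(best)
    return selected, gains

def smart_style_mixture(task_embs, sample_embs_per_task, budget):
    """Stage 1: score every task by its submodular marginal gain and split the
    fine-tuning budget in proportion to those scores.
    Stage 2: within each task, greedily pick non-redundant samples.
    Inputs are hypothetical: task_embs is a T x d array of L2-normalized task
    embeddings; sample_embs_per_task is a list of n_t x d sample embeddings."""
    task_sim = task_embs @ task_embs.T           # cosine similarity between tasks
    order, gains = greedy_facility_location(task_sim, len(task_embs))
    weights = np.array(gains) / np.sum(gains)    # importance scores -> budget shares
    mixture = {}
    for t, w in zip(order, weights):
        k_t = max(1, int(round(w * budget)))     # samples allotted to task t
        embs = sample_embs_per_task[t]
        sample_sim = embs @ embs.T
        idx, _ = greedy_facility_location(sample_sim, k_t)
        mixture[t] = idx                         # non-redundant sample indices
    return mixture
```

The naive greedy loop above costs O(k·n²) per stage; practical submodular-selection libraries use lazy evaluation to speed this up, but the plain version keeps the two-stage structure easy to follow.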
Keywords
- Artificial intelligence
- Fine-tuning
- Instruction tuning
- Pruning