Token-Efficient Leverage Learning in Large Language Models

by Yuanhao Zeng, Min Wang, Yihang Wang, Yingxia Shao

First submitted to arXiv on: 1 Apr 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The proposed Token-Efficient Leverage Learning (TELL) methodology improves performance on low-resource tasks while reducing task-data requirements by up to an order of magnitude compared to conventional Supervised Fine-Tuning (SFT). TELL is effective across various Large Language Models (LLMs) and tasks, and with the same amount of task data it outperforms SFT. The mechanism of Leverage Learning aligns with the quantization hypothesis, which suggests it is a promising direction for further exploration.

Low Difficulty Summary (original content by GrooveSquid.com)
Leverage Learning is a new way to help Large Language Models work better on small datasets. These models do great when they have lots of data, but they struggle when there's not much data available. To fix this, the researchers developed Token-Efficient Leverage Learning (TELL), which helps the model learn more from less data. They tested TELL on many different tasks and models, and it worked really well, even outperforming other methods that use more data.

Keywords

  • Artificial intelligence
  • Fine-tuning
  • Quantization
  • Supervised
  • Token