
Summary of Learning with Less: Knowledge Distillation from Large Language Models via Unlabeled Data, by Juanhui Li et al.


Learning with Less: Knowledge Distillation from Large Language Models via Unlabeled Data

by Juanhui Li, Sreyashi Nag, Hui Liu, Xianfeng Tang, Sheikh Sarwar, Limeng Cui, Hansu Gu, Suhang Wang, Qi He, Jiliang Tang

First submitted to arXiv on: 12 Nov 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract, written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper "Learning with Less: Knowledge Distillation from Large Language Models via Unlabeled Data" tackles the problem that large language models (LLMs), despite their strong performance, are limited in real-world NLP applications by their computational demands. Smaller models are typically used for deployment instead, but training them is hindered by the scarcity of labeled data. The authors propose LLKD (Learning with Less computational resources and less data for Knowledge Distillation from LLMs), which leverages unlabeled data: the LLM teacher generates pseudo-labels that are then used to train the smaller student model. The method prioritizes samples where the teacher demonstrates high confidence in its labeling, indicating reliable pseudo-labels, and where the student exhibits a high information need, identifying challenging samples that require further learning; a rough sketch of this selection idea appears after the summaries below. Comprehensive experiments across various datasets show superior performance with higher data efficiency.

Low Difficulty Summary (original content by GrooveSquid.com)
The paper "Learning with Less: Knowledge Distillation from Large Language Models via Unlabeled Data" is about making language models work well with less computing power and less data. Big language models are very good at understanding text, but they use a lot of computing power. Smaller models can be used instead, but they need lots of labeled data to learn. The authors' idea is to use the big model to help train the small model on unlabeled data, so the small model can learn from the big one without needing as much labeled data or computing power. The authors tested their idea on different datasets and showed that it works well.

Keywords

» Artificial intelligence  » Knowledge distillation  » NLP