Summary of Compressing LLMs: The Truth is Rarely Pure and Never Simple, by Ajay Jaiswal et al.


Compressing LLMs: The Truth is Rarely Pure and Never Simple

by Ajay Jaiswal, Zhe Gan, Xianzhi Du, Bowen Zhang, Zhangyang Wang, Yinfei Yang

First submitted to arXiv on: 2 Oct 2023

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High difficulty summary (written by the paper authors)
Read the original abstract here.

Medium difficulty summary (written by GrooveSquid.com; original content)
This paper introduces LLM-KICK, a new evaluation protocol for compressed Large Language Models (LLMs) that redefines how existing state-of-the-art (SoTA) compression methods are assessed. The authors show that current SoTA pruning and quantization techniques suffer significant performance degradation even at trivial sparsity ratios. At the same time, they demonstrate that pruned LLMs can retain robust in-context retrieval and summarization capabilities at 50% or higher sparsity. By surfacing both the strengths and shortcomings of current SoTA compression methods, the paper offers insights for developing better LLM compression techniques.

Low difficulty summary (written by GrooveSquid.com; original content)
This research aims to improve how we evaluate compressed Large Language Models (LLMs). Right now, evaluation relies on a simple metric called perplexity, which is not very good at telling us how well these models actually work. The authors propose a new set of tests that are more accurate and can reveal whether a model still works well after some of its parts are removed or simplified. They found that current methods for shrinking LLMs don't always work as expected, but that some pruned models are still very good at retrieving information from text. The authors hope their approach will lead to better ways to compress and use these powerful language models.

Keywords

* Artificial intelligence  * Perplexity  * Pruning  * Quantization  * Summarization