SliceGPT: Compress Large Language Models by Deleting Rows and Columns

by Saleh Ashkboos, Maximilian L. Croci, Marcelo Gennari do Nascimento, Torsten Hoefler, James Hensman

First submitted to arXiv on: 26 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, which can be read on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper presents SliceGPT, a novel post-training sparsification scheme for large language models. Existing sparsification techniques require additional data structures and offer only limited speedups on current hardware. SliceGPT instead replaces each weight matrix with a smaller dense matrix, reducing the embedding dimension of the network. The authors demonstrate that SliceGPT can remove up to 25% of model parameters (including embeddings) while maintaining impressive zero-shot task performance for the LLAMA2-70B, OPT 66B, and Phi-2 models. Notably, the sliced models run on fewer GPUs and run faster without any additional code optimization. The method rests on a new insight, computational invariance in transformer networks, which enables SliceGPT and may inspire future approaches to reducing the memory and computation demands of pre-trained models.
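
To make the mechanism concrete, the sketch below illustrates the general idea in PyTorch. It is not the authors’ implementation: the function names are hypothetical, and the PCA-style rotation is only an illustrative stand-in for the orthogonal transformations the paper obtains from its computational-invariance result.

```python
import torch

def rotation_from_activations(acts: torch.Tensor) -> torch.Tensor:
    # acts: (num_samples, d) calibration activations of the residual stream.
    # Eigenvectors of the covariance give an orthogonal basis ordered by
    # variance (a PCA-style stand-in for the paper's derived rotations).
    cov = acts.T @ acts / acts.shape[0]
    _, eigvecs = torch.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs.flip(-1)              # reorder columns: high variance first

def slice_weights(w_read, w_write, q, keep):
    # w_read:  (d, d_ff) weight that reads from the d-dim residual stream.
    # w_write: (d_ff, d) weight that writes back to the residual stream.
    # Rotate both into the new basis, then delete the rows/columns that
    # correspond to the lowest-variance directions.
    w_read_sliced = (q.T @ w_read)[:keep, :]   # (keep, d_ff)
    w_write_sliced = (w_write @ q)[:, :keep]   # (d_ff, keep)
    return w_read_sliced, w_write_sliced

torch.manual_seed(0)
d, d_ff, n = 8, 32, 256
x = torch.randn(n, d)
w_read, w_write = torch.randn(d, d_ff), torch.randn(d_ff, d)

q = rotation_from_activations(x)
x_rot = x @ q  # activations in the rotated basis

# Computational invariance: with no slicing, the rotation is exact.
assert torch.allclose(x @ w_read, x_rot @ (q.T @ w_read), atol=1e-4)

# Slicing: keep 6 of 8 directions; every matrix stays dense, just smaller.
w_read_s, w_write_s = slice_weights(w_read, w_write, q, keep=6)
approx = x_rot[:, :6] @ w_read_s   # approximates x @ w_read
```

Because q is orthogonal, inserting the rotation changes nothing until directions are deleted; this is what lets every weight matrix shrink while staying dense, so no extra sparse data structures are needed at inference time.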
Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper shows how to make large language models use fewer computing resources. These models currently take up a lot of space and time on computers, and the new method, called SliceGPT, helps by making a model smaller while keeping its performance the same or very close. The authors tested it on three different models and showed that it can cut the computing power needed to run them with little loss in accuracy. This matters because it could make these powerful language models accessible to more people.

Keywords

  • Artificial intelligence
  • Embedding
  • Optimization
  • Transformer
  • Zero shot