
Summary of LoRAP: Transformer Sub-Layers Deserve Differentiated Structured Compression for Large Language Models, by Guangyan Li et al.


LoRAP: Transformer Sub-Layers Deserve Differentiated Structured Compression for Large Language Models

by Guangyan Li, Yongqiang Tang, Wensheng Zhang

First submitted to arXiv on: 15 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper but are written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to read the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes LoRAP, a novel approach to reducing the parameter scale of large language models (LLMs) that combines Low-Rank matrix approximation And structured Pruning. The authors observe that the multi-head self-attention (MHA) sub-layer of the Transformer exhibits a clear low-rank structure, while the feed-forward network (FFN) sub-layer does not. Based on this observation, they design a mixed compression scheme: an input-activation-weighted singular value decomposition for the MHA sub-layer and a gradient-free structured channel pruning method for the FFN sub-layer. In zero-shot perplexity and zero-shot task classification experiments, the proposed approach outperforms previous structured compression rivals. (A short illustrative code sketch of these two ideas appears after the summaries below.)
Low Difficulty Summary (written by GrooveSquid.com, original content)
Large language models (LLMs) are really good at doing certain tasks, but they need a lot of memory and computer power to do them. Some researchers wanted to figure out how to make LLMs smaller without losing their abilities. They discovered that one part of the model, called multi-head self-attention, has some special properties that can be used to shrink it. They also found that another part of the model, called feed-forward network, needs to be treated differently. By combining these new ideas with some old techniques, they were able to create a smaller version of the model that still works well.
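
To make the two compression ideas in the medium-difficulty summary concrete, here is a minimal, hypothetical PyTorch sketch, not the authors' code: an input-activation-weighted SVD that factorizes an attention projection matrix into two low-rank matrices, and a gradient-free structured pruning step that drops the least important intermediate channels of an FFN block. The function names, shapes, and the FFN importance score are illustrative assumptions rather than the paper's exact formulation.

# Hypothetical sketch of the two compression ideas summarized above; the
# function names and the FFN importance score are illustrative assumptions,
# not the paper's exact method.
import torch

def weighted_low_rank(W: torch.Tensor, act_norm: torch.Tensor, rank: int):
    """Factorize an attention projection W (out_dim x in_dim) into two
    rank-`rank` matrices, weighting each input column by its activation norm."""
    Ws = W * act_norm                              # scale columns by input-activation importance
    U, sigma, Vt = torch.linalg.svd(Ws, full_matrices=False)
    A = U[:, :rank] * sigma[:rank]                 # (out_dim, rank)
    B = Vt[:rank, :] / act_norm                    # (rank, in_dim), undo the column scaling
    return A, B                                    # W is approximated by A @ B with far fewer parameters

def prune_ffn_channels(W_up: torch.Tensor, W_down: torch.Tensor,
                       act_norm: torch.Tensor, keep_ratio: float):
    """Gradient-free structured pruning: keep only the most important intermediate
    channels of an FFN block (W_up: hidden x in, W_down: out x hidden).
    Importance here is a simple proxy: per-channel weight norm of W_down
    times that channel's activation norm."""
    importance = W_down.abs().sum(dim=0) * act_norm
    k = max(1, int(keep_ratio * importance.numel()))
    keep = torch.topk(importance, k).indices
    return W_up[keep, :], W_down[:, keep]

# Example usage on random tensors standing in for calibration statistics:
W_q = torch.randn(4096, 4096)                      # an attention query projection
act_norm_attn = torch.rand(4096) + 0.1             # per-input-dimension activation norms
A, B = weighted_low_rank(W_q, act_norm_attn, rank=512)

W_up, W_down = torch.randn(11008, 4096), torch.randn(4096, 11008)
act_norm_ffn = torch.rand(11008) + 0.1             # per-channel activation norms
W_up_p, W_down_p = prune_ffn_channels(W_up, W_down, act_norm_ffn, keep_ratio=0.5)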

Keywords

» Artificial intelligence  » Classification  » Perplexity  » Pruning  » Self attention  » Transformer  » Zero shot