Summary of Basis Selection: Low-Rank Decomposition of Pretrained Large Language Models for Target Applications, by Yang Li et al.


Basis Selection: Low-Rank Decomposition of Pretrained Large Language Models for Target Applications

by Yang Li, Changsheng Zhao, Hyungtak Lee, Ernie Chang, Yangyang Shi, Vikas Chandra

First submitted to arXiv on: 24 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Hardware Architecture (cs.AR); Computation and Language (cs.CL)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com; original content)
The paper introduces a low-rank decomposition method for compressing large language models (LLMs) tailored to specific application requirements. By identifying and removing components that are redundant for the target application, the method retains only the elements the application actually needs. Concretely, it represents LLM weight matrices as linear combinations of basis components, prunes irrelevant bases, and enhances the model with new, beneficial ones. Results on the Llama 2 7B and 13B models show significant size reduction while maintaining accuracy comparable to state-of-the-art techniques.
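
The core idea, factoring each weight matrix into basis components, scoring how much each component matters for the target application, and keeping only the useful ones, can be illustrated with a small sketch. The sketch below is not the authors' implementation: it assumes a plain SVD factorization and a simple magnitude-based importance score, and the helper names (`low_rank_decompose`, `prune_bases`) are hypothetical. The paper instead selects and adds bases based on the target application itself.

```python
import numpy as np

def low_rank_decompose(weight: np.ndarray, rank: int):
    """Factor a weight matrix into `rank` basis components via truncated SVD."""
    U, S, Vt = np.linalg.svd(weight, full_matrices=False)
    # Column i of A together with row i of B is one rank-1 basis component.
    A = U[:, :rank] * S[:rank]
    B = Vt[:rank, :]
    return A, B

def prune_bases(A: np.ndarray, B: np.ndarray, importance: np.ndarray, keep: int):
    """Keep only the `keep` components with the highest importance scores."""
    idx = np.argsort(importance)[::-1][:keep]
    return A[:, idx], B[idx, :]

# Toy usage on a random "weight matrix".
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))

A, B = low_rank_decompose(W, rank=64)

# Importance here is just each component's magnitude; the paper instead judges
# usefulness with respect to the target application and may also add new bases.
importance = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
A_kept, B_kept = prune_bases(A, B, importance, keep=32)

W_compressed = A_kept @ B_kept  # low-rank replacement for the original weights
```

Storing the two thin factors instead of the full matrix is where the size reduction comes from: 2 x 512 x 32 values in this toy example versus 512 x 512 originally. The paper's contribution is in how the bases are chosen and augmented for a specific application, not in the SVD step itself.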

Low Difficulty Summary (written by GrooveSquid.com; original content)
Large language models can make many tasks better, but they use a lot of energy and take up lots of space. This makes it hard to use them on devices like computers or phones. To fix this, researchers developed a way to shrink these models without losing their ability to do tasks well. They did this by finding parts that aren’t necessary for specific jobs and removing those parts. The method works by breaking down the model’s weights into smaller pieces, getting rid of unimportant ones, and adding new helpful ones. This approach was tested on two types of large language models and showed that it can make them much smaller while still keeping them accurate.

Keywords

» Artificial intelligence  » Llama  » Pruning