Summary of Pruning via Merging: Compressing LLMs via Manifold Alignment Based Layer Merging, by Deyuan Liu et al.
Pruning via Merging: Compressing LLMs via Manifold Alignment Based Layer Merging
by Deyuan Liu, Zhanyue Qin, Hairu Wang, Zhao Yang, Zecheng Wang, Fangying Rong, Qingbin Liu, Yanchao Hao, Xi Chen, Cunhang Fan, Zhao Lv, Zhiying Tu, Dianhui Chu, Bo Li, Dianbo Sui
First submitted to arXiv on: 24 Jun 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper but is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to read whichever version suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper proposes Manifold-Based Knowledge Alignment and Layer Merging Compression (MKA), a novel approach to compressing large language models (LLMs) while preserving their performance. Unlike traditional pruning methods, MKA uses manifold learning and the Normalized Pairwise Information Bottleneck (NPIB) measure to identify and merge similar layers, reducing model size and achieving substantial compression ratios (a toy sketch of this merging idea appears after the table). The authors evaluate MKA on multiple benchmark datasets and various LLMs, finding that it not only preserves model performance but also outperforms traditional pruning methods. When combined with quantization, MKA achieves even greater compression, offering a resource-efficient, performance-preserving compression technique for LLMs. |
Low | GrooveSquid.com (original content) | MKA is a new way to make large language models smaller without losing their ability to understand and generate text. The authors use special math tools to merge parts of the model that are similar, which reduces its size while keeping it good at its job. They tested MKA on many different datasets and language models and found that it works really well. Combining MKA with another technique called quantization shrinks the model even further. For example, the authors were able to make a big language model 44% smaller without sacrificing much performance. |
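To make the merging idea from the medium summary concrete, below is a minimal Python (PyTorch) sketch of similarity-driven layer merging. It is an illustration under assumptions, not the paper's implementation: `layer_similarity` uses cosine similarity as a stand-in for the paper's NPIB measure, the merge rule is plain weight averaging rather than the paper's manifold-alignment-based merge, and both function names are hypothetical.

```python
import torch
import torch.nn.functional as F

def layer_similarity(acts_i: torch.Tensor, acts_j: torch.Tensor) -> float:
    """Score how alike two layers' calibration activations are.

    Cosine similarity is a hypothetical stand-in for the paper's
    NPIB measure, which is not reproduced here.
    """
    a = acts_i.flatten().float()
    b = acts_j.flatten().float()
    return F.cosine_similarity(a, b, dim=0).item()

def merge_most_similar_layers(layers, activations, n_merges: int):
    """Greedily merge the most similar adjacent pair of layers.

    `layers` is a list of per-layer weight dicts (same keys in each);
    `activations` holds one calibration activation tensor per layer.
    Plain weight averaging is used as an illustrative merge rule; the
    paper's actual manifold-alignment-based merge is more involved.
    """
    layers, activations = list(layers), list(activations)
    for _ in range(n_merges):
        # Score each adjacent pair and pick the most similar one.
        sims = [layer_similarity(activations[i], activations[i + 1])
                for i in range(len(layers) - 1)]
        i = max(range(len(sims)), key=sims.__getitem__)
        # Collapse the pair into a single averaged layer.
        layers[i:i + 2] = [{k: (layers[i][k] + layers[i + 1][k]) / 2
                            for k in layers[i]}]
        activations[i:i + 2] = [(activations[i] + activations[i + 1]) / 2]
    return layers
```

As a rough usage example, on a 32-layer model, `merge_most_similar_layers(layers, acts, 14)` would remove 14 layers, about the 44% reduction quoted in the low difficulty summary.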
Keywords
» Artificial intelligence » Alignment » Language model » Manifold learning » Model compression » Pruning » Quantization