
Summary of Shortened LLaMA: Depth Pruning for Large Language Models with Comparison of Retraining Methods, by Bo-Kyeong Kim et al.


Shortened LLaMA: Depth Pruning for Large Language Models with Comparison of Retraining Methods

by Bo-Kyeong Kim, Geonmin Kim, Tae-Ho Kim, Thibault Castells, Shinkook Choi, Junho Shin, Hyoung-Kyu Song

First submitted to arXiv on: 5 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This research paper presents a novel approach to reducing the computational requirements of large language models (LLMs) by employing structured pruning techniques. The study focuses on comparing the effectiveness of width and depth pruning methods in compressing LLMs while maintaining their performance. The authors demonstrate that simple depth pruning can achieve comparable or superior results to recent width pruning studies, particularly under memory-constrained conditions. Additionally, they show that retraining pruned models using continued pretraining on a large corpus outperforms LoRA-based tuning at severe pruning ratios.
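To make the idea concrete, the following short Python sketch (using PyTorch and the Hugging Face transformers library) shows what depth pruning looks like in practice: whole decoder blocks are deleted from a LLaMA-style model, and the smaller model is then retrained. The checkpoint name and the block indices are placeholders for illustration; the paper selects blocks with an importance criterion rather than by fixed position.

# Minimal depth-pruning sketch (not the authors' exact pipeline).
# Assumptions: a LLaMA-style Hugging Face checkpoint and hand-picked block indices.
import torch
from transformers import AutoModelForCausalLM

model_name = "meta-llama/Llama-2-7b-hf"      # assumed checkpoint; any LLaMA-style model works
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

blocks_to_drop = {24, 25, 26, 27}            # hypothetical indices, for illustration only

# Keep every decoder block except the ones marked for removal.
kept = [blk for i, blk in enumerate(model.model.layers) if i not in blocks_to_drop]
model.model.layers = torch.nn.ModuleList(kept)
model.config.num_hidden_layers = len(kept)   # keep the config consistent with the new depth

model.save_pretrained("llama-depth-pruned")  # this model is then retrained to recover quality

Because entire blocks disappear, the forward pass itself becomes shorter, which is one reason depth pruning can speed up inference even when memory limits force small batch sizes.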
Low Difficulty Summary (original content by GrooveSquid.com)
Large language models are amazing tools that help us understand and generate human-like text. However, they need a lot of computing power to run. To make them more efficient, researchers have been exploring ways to “prune,” or remove, parts of a model without hurting how well it performs. This paper compares two pruning styles: width pruning, which makes each layer slimmer by removing pieces inside it (such as attention heads), and depth pruning, which removes entire layers. The study shows that a simple depth pruning method can match or beat more complicated width pruning approaches at making the model faster and more efficient, especially when memory is limited.
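As a companion to the depth-pruning sketch above, the sketch below illustrates the two retraining routes the paper compares once blocks have been removed: LoRA-based tuning versus continued pretraining of all remaining weights. The hyperparameters, target module names, and the "llama-depth-pruned" path are assumptions carried over from the earlier sketch, not the paper's settings.

# Hedged sketch of the two retraining routes compared in the paper.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

def load_pruned():
    # "llama-depth-pruned" is the hypothetical output of the previous sketch.
    return AutoModelForCausalLM.from_pretrained("llama-depth-pruned", torch_dtype=torch.float16)

# Route 1: LoRA-based tuning -- freeze the base weights and train small
# low-rank adapters (here on the attention projections).
lora_model = get_peft_model(
    load_pruned(),
    LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
               target_modules=["q_proj", "v_proj"],   # assumed LLaMA-style module names
               task_type="CAUSAL_LM"),
)
lora_model.print_trainable_parameters()

# Route 2: continued pretraining -- keep every remaining weight trainable and
# run ordinary next-token-prediction training on a large text corpus.
# The paper finds this recovers quality better at severe pruning ratios.
full_model = load_pruned()
for p in full_model.parameters():
    p.requires_grad = True
# Either model is then trained with a standard causal-LM training loop (not shown).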

Keywords

* Artificial intelligence  * LoRA  * Pretraining  * Pruning