Summary of The Unreasonable Ineffectiveness of the Deeper Layers, by Andrey Gromov et al.
The Unreasonable Ineffectiveness of the Deeper Layers
by Andrey Gromov, Kushal Tirumala, Hassan Shapourian, Paolo Glorioso, Daniel A. Roberts
First submitted to arXiv on: 26 Mar 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Machine Learning (cs.LG); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from whichever version suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract on arXiv.
Medium | GrooveSquid.com (original content) | This paper investigates how language models store knowledge in their weights by pruning whole blocks of layers and measuring what is lost on question-answering tasks. The authors identify the optimal block of layers to prune by measuring how similar each layer’s activations are to those a few layers deeper, and then fine-tune the model to “heal” the damage (a sketch of both steps appears after this table). Surprisingly, they find that popular open-weight models can withstand the removal of up to half of their layers without significant performance degradation, suggesting either that current pretraining methods don’t fully exploit the parameters in the deeper layers or that the shallow layers play a critical role in storing knowledge. Healing is done with parameter-efficient finetuning, specifically quantization plus Low-Rank Adapters (QLoRA), so every experiment fits on a single 40GB A100 GPU.
Low | GrooveSquid.com (original content) | Researchers studied how language models remember things by removing parts of the model that aren’t needed. They found that even when they removed up to half of the model’s layers, it still worked pretty well! This suggests that maybe the way we train these models doesn’t make full use of the deeper layers, or maybe the earlier layers are doing most of the work of remembering things. To repair the model after cutting it down, they used a memory-saving way of fine-tuning, which let them run every experiment on just one powerful computer.
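
The layer-selection heuristic described in the medium summary can be sketched compactly. The Python snippet below is a minimal illustration rather than the authors’ released code: the function names are ours, the angular distance is averaged over all token positions (an assumption about the exact averaging), and the activations are assumed to come from a Hugging Face-style model called with `output_hidden_states=True`.

```python
import torch

def angular_distance(x_l: torch.Tensor, x_l_plus_n: torch.Tensor) -> torch.Tensor:
    """Mean angular distance between activations at two layer boundaries.

    x_l, x_l_plus_n: activations with matching shape [..., hidden_dim],
    captured at layer l and layer l + n on the same inputs.
    """
    cos = torch.nn.functional.cosine_similarity(x_l, x_l_plus_n, dim=-1)
    return torch.arccos(cos.clamp(-1.0, 1.0)).mean() / torch.pi

def find_block_to_prune(hidden_states, n: int) -> int:
    """Return the start index l* of the n-layer block whose input and
    output activations are most similar, i.e. the cheapest block to drop.

    hidden_states: sequence of per-boundary activations, e.g. the
    `hidden_states` tuple from a Hugging Face model run with
    output_hidden_states=True (length num_layers + 1).
    """
    num_layers = len(hidden_states) - 1
    dists = torch.stack([
        angular_distance(hidden_states[l], hidden_states[l + n])
        for l in range(num_layers - n + 1)
    ])
    return int(dists.argmin())

# Dropping the chosen block on a Llama-style model
# (the attribute path below is an assumption, not the paper's code):
#   l_star = find_block_to_prune(hidden_states, n)
#   del model.model.layers[l_star : l_star + n]
#   model.config.num_hidden_layers -= n
```

The intuition is that if the representation entering a block of layers is nearly identical to the representation leaving it, that block is doing little work, so it can be excised with minimal disruption.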
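
The healing step can likewise be sketched with QLoRA-style finetuning. This assumes the `transformers`, `peft`, and `bitsandbytes` libraries; the checkpoint name, LoRA rank, and target modules are illustrative placeholders, not the paper’s exact settings.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit precision (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

# ... delete the chosen block of layers here, as in the previous sketch ...

# Attach low-rank adapters (the "LoRA" in QLoRA) and train only those.
lora_config = LoraConfig(
    r=8,  # illustrative rank, not necessarily the paper's setting
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Because the base weights stay frozen in 4-bit precision and only the small adapter matrices are trained, this style of healing is what makes pruning experiments on a single 40GB A100 feasible.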
Keywords
* Artificial intelligence
* Parameter efficient
* Pretraining
* Pruning
* Quantization
* Question answering