LLM-Rank: A Graph Theoretical Approach to Pruning Large Language Models

by David Hoffmann, Kailash Budhathoki, Matthaeus Kleindessner

First submitted to arXiv on: 17 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper’s original abstract, written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The proposed pruning method, MLPRank, leverages centrality measures from graph theory to prune large language models for more efficient inference. By reducing computational requirements and memory footprint, the approach aims to make these models easier to deploy. The authors represent multilayer perceptrons as weighted directed acyclic graphs and apply a modified PageRank centrality measure to compute node importance scores. Combining these scores with uniform pruning yields structured sparsity. The same idea extended to decoder-only transformer models is dubbed LLMRank. Both variants perform strongly: on average, MLPRank retains 6.09% higher accuracy than three popular baselines, and LLMRank achieves 13.42% better accuracy retention than two popular baselines.
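To make the recipe concrete, here is a minimal sketch of the general idea the summary describes: build a weighted directed acyclic graph over an MLP's neurons, score nodes with weighted PageRank, and prune the lowest-scoring neurons in each layer. It uses plain weighted PageRank over absolute weight magnitudes; the paper's modified centrality measure and released code may differ, and the helper names (neuron_scores, prune_layer), the damping factor, and the pruning ratio are illustrative assumptions, not the authors' API.

```python
# A minimal sketch of graph-based structured pruning, assuming the MLP is
# given as a list of NumPy weight matrices W[l] of shape (out_dim, in_dim).
# Illustrative only; not the paper's exact (modified) PageRank formulation.
import numpy as np

def neuron_scores(weights, damping=0.85, iters=100):
    """Score neurons via weighted PageRank on the MLP's layered DAG.

    Nodes are all neurons across layers; an edge runs from input neuron j
    of layer l to output neuron i of that layer with weight |W[l][i, j]|.
    Returns one score vector per layer of neurons (input layer included).
    """
    sizes = [weights[0].shape[1]] + [W.shape[0] for W in weights]
    offsets = np.cumsum([0] + sizes)  # node-index offset of each layer
    n = int(offsets[-1])              # total number of neurons

    # Dense adjacency matrix of the layered graph, edge weight = |W|.
    A = np.zeros((n, n))
    for l, W in enumerate(weights):
        src, dst = offsets[l], offsets[l + 1]
        A[src:src + W.shape[1], dst:dst + W.shape[0]] = np.abs(W).T

    # Row-normalize to a transition matrix; dangling rows (the last layer)
    # redistribute uniformly, as in standard PageRank.
    row_sums = A.sum(axis=1, keepdims=True)
    P = np.divide(A, row_sums, out=np.full_like(A, 1.0 / n),
                  where=row_sums > 0)

    # Power iteration for the PageRank vector.
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (r @ P)
    return [r[offsets[l]:offsets[l + 1]] for l in range(len(sizes))]

def prune_layer(W, out_scores, ratio=0.5):
    """Structured pruning: drop the lowest-scoring output neurons (rows).

    Also returns the kept indices so the caller can remove the matching
    input columns of the next layer's weight matrix.
    """
    keep = np.sort(np.argsort(out_scores)[int(ratio * len(out_scores)):])
    return W[keep, :], keep

# Example: prune a random two-layer MLP at 50% structured sparsity.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 32)), rng.standard_normal((16, 64))]
scores = neuron_scores(weights)
weights[0], kept = prune_layer(weights[0], scores[1])  # scores[1] = hidden neurons
weights[1] = weights[1][:, kept]  # keep downstream shapes consistent
```

Note the last step: because the pruning is structured, removing an output neuron also requires deleting the corresponding input column of the following layer, which is what yields real speedups rather than just zeroed weights.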
Low Difficulty Summary (original content by GrooveSquid.com)
Large language models are getting better at understanding us, but they’re also getting bigger and more expensive to use. To solve this problem, researchers have developed a new way to make these models smaller and faster without losing their ability to understand things. They did this by creating a special kind of map that shows which parts of the model are most important. Then, they used this map to decide which parts to keep and which parts to get rid of. This new method is called MLPRank. The researchers also tested it on a type of model called a transformer, and it worked even better there! They’re making their code available so that others can use this technique too.

Keywords

» Artificial intelligence  » Decoder  » Inference  » Pruning  » Transformer