
Summary of FuseGPT: Learnable Layers Fusion of Generative Pre-trained Transformers, by Zehua Pei et al.


FuseGPT: Learnable Layers Fusion of Generative Pre-trained Transformers

by Zehua Pei, Hui-Ling Zhen, Xianzhi Yu, Sinno Jialin Pan, Mingxuan Yuan, Bei Yu

First submitted to arXiv on: 21 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The high difficulty version is the paper’s original abstract, which can be read on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)

The paper proposes FuseGPT, a methodology for recovering model performance after pruning transformer blocks in Generative Pre-trained Transformers (GPTs). The authors introduce Macro Influence (MI), an importance detection metric that measures how much information is lost when each block is removed. They then propose group-level layers fusion, which injects the parameters of unimportant blocks into neighboring blocks through iterative fine-tuning with lightweight learnable rank decomposition matrices. Using only a modest amount of data, this approach outperforms previous work in both perplexity and zero-shot task performance.
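
Since the medium summary compresses two concrete techniques into one sentence each, a toy sketch may help. Below is a minimal, self-contained PyTorch illustration of the two ideas, not the authors’ implementation: block importance is scored by the output error that a block’s removal causes (a crude stand-in for Macro Influence), and the pruned model is then repaired by fine-tuning a learnable rank decomposition (LoRA-style) attached to a neighboring block. The block architecture, rank, MSE objective, and training schedule are all illustrative assumptions.

```python
# Minimal toy sketch (PyTorch). Illustrative only: simple residual MLP blocks
# stand in for GPT transformer blocks; names, rank, loss, and schedule are
# assumptions, not the paper's implementation.
import torch
import torch.nn as nn

torch.manual_seed(0)

class Block(nn.Module):
    """Stand-in for a transformer block: a residual feed-forward layer."""
    def __init__(self, d):
        super().__init__()
        self.ff = nn.Linear(d, d)

    def forward(self, x):
        return x + torch.tanh(self.ff(x))

d, n_blocks = 16, 6
blocks = nn.ModuleList(Block(d) for _ in range(n_blocks))
for p in blocks.parameters():          # the "pretrained" weights stay frozen
    p.requires_grad_(False)
calib = torch.randn(64, d)             # stand-in for calibration data

def run(stack, x):
    for b in stack:
        x = b(x)
    return x

with torch.no_grad():
    ref = run(blocks, calib)           # full-model outputs to be preserved

# Step 1 -- importance detection in the spirit of Macro Influence:
# score each block by the output error its removal causes.
with torch.no_grad():
    influence = [
        nn.functional.mse_loss(
            run([b for j, b in enumerate(blocks) if j != i], calib), ref
        ).item()
        for i in range(n_blocks)
    ]
victim = min(range(n_blocks), key=lambda i: influence[i])  # least important

# Step 2 -- fusion: attach a learnable rank decomposition (LoRA-style) to a
# neighboring block and fine-tune it so the pruned stack matches the full
# model's outputs on the calibration data.
class LoRALinear(nn.Module):
    def __init__(self, base, r=4):
        super().__init__()
        self.base = base                                     # frozen weight
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.zeros(d_out, r))         # zero init: start
        self.B = nn.Parameter(torch.randn(r, d_in) * 0.01)   # at base behavior

    def forward(self, x):
        return self.base(x) + x @ self.B.t() @ self.A.t()

neighbor = blocks[victim - 1] if victim > 0 else blocks[victim + 1]
neighbor.ff = LoRALinear(neighbor.ff)
remaining = [b for j, b in enumerate(blocks) if j != victim]

opt = torch.optim.Adam([neighbor.ff.A, neighbor.ff.B], lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(run(remaining, calib), ref)
    loss.backward()
    opt.step()

print(f"removed block {victim} (influence {influence[victim]:.4f}); "
      f"loss after fusion: {loss.item():.6f}")
```

Zero-initializing A makes the fused stack start exactly at the pruned model’s behavior, so fine-tuning can only shrink the gap to the original outputs; the paper’s method applies this fusion at the group level and iterates, rather than patching a single neighbor as in this sketch.
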
Low Difficulty Summary (written by GrooveSquid.com, original content)

FuseGPT is a new way to shrink big language models without hurting their quality. Today, compressing a model usually means simply throwing away the parts that matter least. But what if we could reuse those parts instead? That’s what FuseGPT does. It figures out which parts matter least and, rather than discarding them, folds their knowledge into the neighboring parts that remain. This helps the smaller model stay strong on big tasks like language translation or text summarization.

Keywords

» Artificial intelligence  » Fine tuning  » Perplexity  » Pruning  » Summarization  » Transformer  » Translation  » Zero shot