
Summary of Relaxed Recursive Transformers: Effective Parameter Sharing with Layer-wise LoRA, by Sangmin Bae et al.


Relaxed Recursive Transformers: Effective Parameter Sharing with Layer-wise LoRA

by Sangmin Bae, Adam Fisch, Hrayr Harutyunyan, Ziwei Ji, Seungyeon Kim, Tal Schuster

First submitted to arXiv on: 28 Oct 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper explores ways to reduce the size and cost of large language models (LLMs) without sacrificing performance. It focuses on "layer tying", a form of parameter sharing in Transformers whose effectiveness has so far been limited in modern LLMs. The authors introduce methods for converting existing LLMs into smaller "Recursive Transformers" that reuse a single block of parameters across layers with minimal loss of performance. They further propose Relaxed Recursive Transformers, which add layer-wise LoRA modules to loosen the strict layer-tying constraint while keeping the model compact; a conceptual code sketch of this idea follows these summaries. The results show that the recursive models outperform similarly sized vanilla pretrained models and knowledge distillation baselines, and can recover most of the performance of the original "full-size" model. The paper also proposes a new inference paradigm, Continuous Depth-wise Batching, which has the potential to deliver significant gains in inference throughput.

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper is about making big language models smaller and cheaper without losing their ability to understand language. It uses an idea called "layer tying" to make these models more compact. The authors show that this can work well for existing models, and they propose new ways to make it work even better. They compare their results to other methods and find that the new approach performs just as well or better. Finally, they suggest a new way of running these smaller models that speeds up how fast they can answer questions.

Keywords

» Artificial intelligence  » Inference  » Knowledge distillation  » LoRA