
Summary of DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation, by Sunghyeon Woo et al.


DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation

by Sunghyeon Woo, Baeseong Park, Byeongwook Kim, Minjung Jo, Se Jung Kwon, Dongsuk Jeon, Dongsoo Lee

First submitted to arXiv on: 27 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The proposed Dropping Backward Propagation (DropBP) approach reduces computational cost and activation memory while maintaining accuracy when training large language models. The method randomly drops layers during backward propagation, which is equivalent to training shallow submodules formed by the undropped layers and residual connections. DropBP also calculates each layer's sensitivity and uses it to assign a drop rate, stabilizing the training process. It can be applied to full fine-tuning and integrated with parameter-efficient fine-tuning (PEFT) methods. Compared to the baseline, DropBP reduces training time by 44% with comparable accuracy, accelerates convergence by 1.5x, and enables training with a 6.2x longer sequence length on a single NVIDIA A100 GPU. It also increases throughput by 79% on an NVIDIA A100 GPU and by 117% on an Intel Gaudi2 HPU.
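To make the mechanism described above concrete, below is a minimal PyTorch sketch of the core idea: a residual sub-block whose backward pass (and activation storage) is randomly skipped while its forward output is still computed. The class name `DropBPBlock` and the fixed `drop_prob` argument are illustrative assumptions, not the authors' implementation; in the paper, per-layer drop rates are assigned from a computed layer sensitivity rather than set by hand.

```python
import torch
import torch.nn as nn


class DropBPBlock(nn.Module):
    """Wraps a residual sub-block (e.g., an attention or MLP block) so that
    its backward pass can be randomly skipped while the forward pass stays
    exact. Illustrative sketch only; names and structure are assumptions."""

    def __init__(self, block: nn.Module, drop_prob: float = 0.0):
        super().__init__()
        self.block = block
        # In the paper, drop rates are derived from layer sensitivity;
        # here drop_prob is just a fixed hyperparameter for illustration.
        self.drop_prob = drop_prob

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training and torch.rand(()).item() < self.drop_prob:
            # Dropped layer: run the forward pass without building an
            # autograd graph, so no activations are stored and the backward
            # pass flows only through the residual connection below.
            with torch.no_grad():
                out = self.block(x)
        else:
            out = self.block(x)
        return x + out  # residual connection keeps the network connected


# Toy usage: with drop_prob=0.5, roughly half of the backward passes skip
# this sub-block, so its parameters receive no gradient on those steps.
layer = DropBPBlock(
    nn.Sequential(nn.Linear(16, 64), nn.GELU(), nn.Linear(64, 16)),
    drop_prob=0.5,
)
x = torch.randn(4, 16, requires_grad=True)
layer(x).sum().backward()
```

Because the dropped sub-block runs without an autograd graph, gradients reach earlier layers only through the residual connection, which is what makes the remaining backward graph behave like a shallower submodule, as the summary describes.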
Low Difficulty Summary (original content by GrooveSquid.com)
DropBP is a new way to train large language models that uses less computing power and memory. This makes training faster and more efficient. The method works by dropping some layers during backward propagation, which is like training smaller models inside the bigger one. DropBP also estimates how important each layer is to the model's performance, so it can decide how often to drop each layer. This keeps training stable. DropBP can be combined with other methods that already make fine-tuning of large language models cheaper. The results show that DropBP can handle input sequences up to 6.2 times longer than usual on a single GPU, and on some hardware it runs up to 117% faster.

Keywords

* Artificial intelligence
* Fine tuning
* Parameter efficient