
Summary of "Exploiting Student Parallelism for Efficient GPU Inference of BERT-like Models in Online Services," by Weiyan Wang et al.


Exploiting Student Parallelism for Efficient GPU Inference of BERT-like Models in Online Services

by Weiyan Wang, Yilun Jin, Yiming Zhang, Victor Junqiu Wei, Han Tian, Li Chen, Jinbao Xue, Yangyu Tao, Di Wang, Kai Chen

First submitted to arXiv on: 22 Aug 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com; original content)
The proposed system aims to address the high inference cost of large BERT-like models used in text mining and web search by developing an efficient online inference framework. It adopts stacking distillation and boosting ensemble methods to distill the original deep model into a group of shallow but virtually stacked student models that run in parallel. This lowers model depth (e.g., to two layers) and reduces inference latency while maintaining accuracy. In addition, adaptive student pruning dynamically adjusts the number of students according to changing online workloads, allowing the student count to drop temporarily during workload bursts with minimal accuracy loss. The results show that the system reduces latency by 1.6x to 4.1x over baselines at comparable accuracy and achieves up to 22.27x higher throughput during workload bursts.
Low Difficulty Summary (written by GrooveSquid.com; original content)
The proposed system makes big BERT-like models run more efficiently on computers called GPUs. These models are good at finding information, but they take too long to work because they have to do lots of complicated calculations. The system solves this problem by breaking down the big model into smaller ones that can work together quickly and accurately. This means it can find information faster while still being very accurate. It also copes well with sudden increases in workload, which is important for services like web search.
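To make the idea concrete, here is a minimal, hypothetical Python sketch of student parallelism as the summaries describe it: a deep model is replaced by a group of shallow students whose outputs combine boosting-style, and adaptive pruning temporarily drops students during a workload burst. All names and numbers here are illustrative assumptions, not the paper's actual code or models.

```python
# Hypothetical sketch of the student-parallelism idea described above.
# A shallow "student" is stood in for by a simple weighted function;
# in the real system each student would be a shallow (e.g. two-layer)
# distilled transformer running in parallel on the GPU.
from typing import Callable, List

def make_student(weight: float) -> Callable[[float], float]:
    """Stand-in for one shallow distilled student model."""
    return lambda x: weight * x

class StudentEnsemble:
    def __init__(self, students: List[Callable[[float], float]]):
        self.students = students
        self.active = len(students)  # number of students currently serving

    def prune_for_burst(self, n_active: int) -> None:
        """Adaptive student pruning: keep only the first n_active students,
        trading a little accuracy for higher throughput during a burst."""
        self.active = max(1, min(n_active, len(self.students)))

    def predict(self, x: float) -> float:
        # Boosting-style combination: each student contributes an
        # additive correction to the ensemble output.
        return sum(s(x) for s in self.students[: self.active])

# Four students with decreasing boosting-style contributions (illustrative).
ensemble = StudentEnsemble([make_student(w) for w in (0.5, 0.25, 0.125, 0.0625)])
full_output = ensemble.predict(1.0)   # all four students contribute
ensemble.prune_for_burst(2)           # workload burst: serve with two students
burst_output = ensemble.predict(1.0)  # cheaper, slightly less accurate output
```

The key design point the sketch mirrors is that the students are independent, so they can run concurrently, and dropping the trailing students degrades the output only gradually rather than breaking it.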

Keywords

» Artificial intelligence  » Bert  » Boosting  » Distillation  » Inference  » Pruning