MCUBERT: Memory-Efficient BERT Inference on Commodity Microcontrollers

by Zebin Yang, Renze Chen, Taiqiang Wu, Ngai Wong, Yun Liang, Runsheng Wang, Ru Huang, Meng Li

First submitted to arXiv on: 23 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (paper authors)
The high difficulty version is the paper’s original abstract, which can be read on arXiv.

Medium Difficulty Summary (GrooveSquid.com, original content)
This paper proposes MCUBERT, a novel approach to enable language models like BERT to run efficiently on tiny microcontroller units (MCUs) through network and scheduling co-optimization. The authors identify the embedding table as a major storage bottleneck for tiny BERT models and develop an MCU-aware two-stage neural architecture search algorithm based on clustered low-rank approximation for embedding compression. Additionally, they propose a novel fine-grained MCU-friendly scheduling strategy to reduce inference memory requirements. The proposed approach achieves significant reductions in parameter size and execution memory, allowing for processing of more than 512 tokens with less than 256KB of memory, while maintaining latency and accuracy.
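
To make the embedding-compression idea concrete, here is a minimal NumPy sketch of a clustered low-rank embedding table: the vocabulary is split into clusters (assumed here to be ordered by token frequency) and each cluster's rows are factorized at a different rank. The cluster boundaries, the ranks, and the `embed` helper are illustrative assumptions; the paper's actual per-cluster configuration is found by its MCU-aware NAS, not hand-picked as below.

```python
import numpy as np

# Hypothetical shapes: a 30522-token vocabulary and hidden size 128
# (roughly BERT-tiny scale); both are assumptions for illustration.
VOCAB, HIDDEN = 30522, 128

# Split the vocabulary into clusters (assumed sorted by frequency, so
# cluster 0 holds the most frequent tokens) and give frequent clusters
# a higher rank. These ranks are illustrative, not searched values.
cluster_bounds = [0, 1024, 8192, VOCAB]   # 3 clusters
cluster_ranks  = [64, 32, 8]              # rank per cluster

rng = np.random.default_rng(0)
factors = []
for (lo, hi), r in zip(zip(cluster_bounds, cluster_bounds[1:]), cluster_ranks):
    A = rng.standard_normal((hi - lo, r)).astype(np.float32)  # per-token codes
    B = rng.standard_normal((r, HIDDEN)).astype(np.float32)   # shared projection
    factors.append((lo, hi, A, B))

def embed(token_id: int) -> np.ndarray:
    """Reconstruct one embedding row: E[token] ~= A_c[token - lo] @ B_c."""
    for lo, hi, A, B in factors:
        if lo <= token_id < hi:
            return A[token_id - lo] @ B
    raise ValueError("token id out of range")

full = VOCAB * HIDDEN
compressed = sum((hi - lo) * r + r * HIDDEN
                 for (lo, hi), r in zip(zip(cluster_bounds, cluster_bounds[1:]),
                                        cluster_ranks))
print(f"params: {full} -> {compressed} ({full / compressed:.1f}x smaller)")
```

With these toy numbers the factorized table stores roughly 8x fewer parameters than the dense table, at the cost of one small matrix-vector product per lookup.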

Low Difficulty Summary (GrooveSquid.com, original content)
MCUBERT is a new way to make BERT work on tiny computers called microcontrollers. These computers are very small and can’t handle big language models like BERT. The researchers found that the main problem was the “embedding table,” which takes up too much space. They came up with a new way to compress this table, making it smaller and more efficient. They also developed a way to schedule the model’s work on these tiny computers so that it uses less memory. With MCUBERT, BERT can now run on these small computers and process longer text sequences without slowing down or losing accuracy.
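
As a rough illustration of the scheduling idea, the sketch below runs a toy two-layer block tile by tile along the sequence dimension, so only a small slice of the intermediate activation is ever alive at once. The sizes, the `forward_tiled` helper, and the row-wise tiling are assumptions for illustration, not MCUBERT's actual fine-grained schedule.

```python
import numpy as np

# Toy two-layer block: Y = relu(X @ W1) @ W2. Untiled execution materializes
# the full (SEQ x HID) intermediate; tiled execution never does.
SEQ, IN, HID, OUT, TILE = 512, 128, 512, 128, 32   # illustrative sizes

rng = np.random.default_rng(0)
X  = rng.standard_normal((SEQ, IN )).astype(np.float32)
W1 = rng.standard_normal((IN,  HID)).astype(np.float32)
W2 = rng.standard_normal((HID, OUT)).astype(np.float32)

def forward_tiled(x, w1, w2, tile):
    """Process TILE sequence rows at a time through both layers, so only a
    (TILE x HID) slice of the intermediate is ever live."""
    out = np.empty((x.shape[0], w2.shape[1]), dtype=np.float32)
    for i in range(0, x.shape[0], tile):
        h = np.maximum(x[i:i + tile] @ w1, 0.0)   # small intermediate slice
        out[i:i + tile] = h @ w2
    return out

ref = np.maximum(X @ W1, 0.0) @ W2
assert np.allclose(forward_tiled(X, W1, W2, TILE), ref, rtol=1e-4, atol=1e-4)
print(f"intermediate live bytes: tiled={TILE*HID*4}, untiled={SEQ*HID*4}")
```

Here the live intermediate shrinks from 1 MB to 64 KB, which is the kind of reduction that lets long sequences fit within a 256 KB MCU memory budget.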

Keywords

» Artificial intelligence  » BERT  » Embedding  » Inference  » Optimization