Summary of Edge Intelligence Optimization for Large Language Model Inference with Batching and Quantization, by Xinyuan Zhang et al.


Edge Intelligence Optimization for Large Language Model Inference with Batching and Quantization

by Xinyuan Zhang, Jiang Liu, Zehui Xiong, Yudong Huang, Gaochang Xie, Ran Zhang

First submitted to arXiv on: 12 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Networking and Internet Architecture (cs.NI)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper addresses the challenge of deploying Large Language Models (LLMs) on edge devices, overcoming limitations such as resource demands, privacy concerns, latency issues, and usage restrictions. The authors formulate an optimization problem tailored to LLM inference that leverages batching, model quantization, batch scheduling, and joint allocation of communication and computation resources to maximize inference throughput, subject to edge-device constraints and varying user requirements. To solve this NP-hard problem, the paper develops a novel Depth-First Tree-Searching algorithm with online tree pruning (DFTSP) that operates within a feasible time complexity. The proposed approach is evaluated through simulations, which show its superiority over other batching benchmarks in throughput across diverse user settings and quantization techniques.
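
The summary does not spell out DFTSP itself, so the sketch below is only a generic illustration of depth-first tree search with online pruning applied to a toy batching/quantization assignment problem. The throughput model, the optimistic bound, the memory constraint, and all function names are hypothetical stand-ins, not the paper's actual formulation.

```python
# Hypothetical sketch: depth-first tree search with online pruning, in the
# spirit of (but not identical to) the paper's DFTSP. Each tree level assigns
# one user request a quantization bit-width; subtrees whose optimistic
# throughput bound cannot beat the incumbent solution are pruned online.

from typing import List, Tuple

QUANT_LEVELS = [16, 8, 4]  # illustrative bit-widths, lowest = fastest

def throughput(assignment: List[Tuple[int, int]]) -> float:
    """Toy objective: lower bit-width -> higher throughput.
    Stands in for the paper's throughput model, which we don't have."""
    return sum(16.0 / bits for _, bits in assignment)

def upper_bound(assignment: List[Tuple[int, int]], remaining: int) -> float:
    """Optimistic bound: assume every remaining request gets the fastest
    (lowest-precision) level. Used for online pruning."""
    return throughput(assignment) + remaining * (16.0 / min(QUANT_LEVELS))

def feasible(assignment: List[Tuple[int, int]], memory_budget: float) -> bool:
    """Toy edge-device constraint: total memory must fit the budget."""
    return sum(bits / 8.0 for _, bits in assignment) <= memory_budget

def dfts(num_requests: int, memory_budget: float) -> dict:
    best = {"value": 0.0, "plan": []}

    def dfs(i: int, assignment: List[Tuple[int, int]]) -> None:
        if i == num_requests:  # complete assignment: update incumbent
            val = throughput(assignment)
            if val > best["value"]:
                best["value"], best["plan"] = val, list(assignment)
            return
        # Online pruning: skip subtrees that cannot beat the incumbent.
        if upper_bound(assignment, num_requests - i) <= best["value"]:
            return
        for bits in QUANT_LEVELS:
            assignment.append((i, bits))
            if feasible(assignment, memory_budget):
                dfs(i + 1, assignment)
            assignment.pop()

    dfs(0, [])
    return best

print(dfts(num_requests=4, memory_budget=3.0))
```

Because the bound is optimistic but never underestimates, pruning discards only subtrees that provably cannot improve on the best solution found so far, which is what keeps an otherwise exponential search within feasible time.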
Low Difficulty Summary (original content by GrooveSquid.com)
This paper helps solve a big problem: how to make Large Language Models work on devices like smartphones or tablets instead of relying on cloud computing. Right now, these models are too resource-intensive for regular devices to run in real time. The researchers came up with a solution that makes models run better on edge devices by storing them with fewer bits (quantization) and grouping user requests together (batching), while carefully sharing the device's communication and computing resources. They also created an algorithm to make sure the model works efficiently without sacrificing performance. This could lead to more AI-powered features being available offline, making our lives easier and more convenient.
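
As a rough, generic illustration of the quantization idea (not the paper's specific scheme), the sketch below maps float32 weights to int8, cutting memory by about 4x. The symmetric per-tensor scheme and all names here are illustrative assumptions.

```python
# Generic illustration of weight quantization (not the paper's method):
# float32 weights are mapped to int8, so each value needs 1 byte instead
# of 4, at the cost of a small, bounded reconstruction error.

import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization to int8 (illustrative)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print("max reconstruction error:", np.abs(w - dequantize(q, s)).max())
```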

Keywords

» Artificial intelligence  » Inference  » Optimization  » Pruning  » Quantization