Summary of Conveyor: Efficient Tool-aware LLM Serving with Tool Partial Execution, by Yechen Xu et al.


Conveyor: Efficient Tool-aware LLM Serving with Tool Partial Execution

by Yechen Xu, Xinhao Kong, Tingjun Chen, Danyang Zhuo

First submitted to arXiv on: 29 May 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Distributed, Parallel, and Cluster Computing (cs.DC); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original content by GrooveSquid.com)
The paper proposes Conveyor, an efficient large language model (LLM) serving system optimized for requests that invoke external tools such as ChatGPT plugins. The authors observe that a tool can begin executing while the LLM is still decoding the rest of the request, rather than only after the full tool call has been generated, and show that this overlap can reduce request completion latency by up to 38.8%. To enable it, they design a novel interface that lets tool developers expose partial-execution opportunities, along with a request scheduler that facilitates partial tool execution. The system addresses the added complexity that external-tool integration brings to LLM serving workloads.
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about making it faster and more efficient to use large language models like those used in chatbots or virtual assistants. Right now, these systems have to do a lot of extra work when you ask them to do something that requires another tool, like generating an image or summarizing text. The authors of this paper came up with a new way to make things more efficient by letting the language model and the other tool work together at the same time. This can make requests complete up to 38.8% faster!
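The overlap idea described in the summaries above can be illustrated with a toy sketch: instead of waiting for the LLM to finish decoding the entire tool call, a scheduler streams argument tokens to the tool as they are decoded, so the tool starts working early. This is a minimal illustration, not Conveyor's actual interface; all class and function names here are hypothetical.

```python
import threading
import queue

class PartialTool:
    """Hypothetical tool that consumes argument tokens as they arrive,
    so tool execution overlaps with LLM decoding (illustrative only,
    not the paper's real developer interface)."""

    def __init__(self):
        self.tokens = queue.Queue()
        self.result = None
        self.worker = threading.Thread(target=self._run)
        self.worker.start()

    def feed(self, token):
        # Called by the scheduler each time the LLM decodes another
        # argument token for this tool invocation.
        self.tokens.put(token)

    def finish(self):
        self.tokens.put(None)   # end-of-arguments sentinel
        self.worker.join()
        return self.result

    def _run(self):
        parts = []
        # In a real system this loop could already start I/O, parsing,
        # or prefetching per token; here it just accumulates them.
        while (tok := self.tokens.get()) is not None:
            parts.append(tok)
        self.result = "".join(parts).upper()  # stand-in for real tool work

def serve(decoded_tokens):
    # Toy scheduler: stream tokens to the tool as they are decoded,
    # instead of invoking the tool only after decoding completes.
    tool = PartialTool()
    for tok in decoded_tokens:
        tool.feed(tok)
    return tool.finish()
```

The latency win comes from the tool's work happening concurrently with decoding, so the request finishes closer to max(decode time, tool time) rather than their sum.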

Keywords

  • Artificial intelligence
  • Language model
  • Large language model