
Summary of A Survey of Resource-efficient LLM and Multimodal Foundation Models, by Mengwei Xu et al.


A Survey of Resource-efficient LLM and Multimodal Foundation Models

by Mengwei Xu, Wangsong Yin, Dongqi Cai, Rongjie Yi, Daliang Xu, Qipeng Wang, Bingyang Wu, Yihao Zhao, Chen Yang, Shihe Wang, Qiyang Zhang, Zhenyan Lu, Li Zhang, Shangguang Wang, Yuanchun Li, Yunxin Liu, Xin Jin, Xuanzhe Liu

First submitted to arXiv on: 16 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Large foundation models, including language models and vision transformers, have transformed the machine learning landscape. While they offer impressive versatility and performance, training and serving them requires significant hardware resources, posing environmental concerns. To address these issues, researchers are developing resource-efficient strategies. This survey examines both the algorithmic and systemic sides of this research, analyzing cutting-edge architectures, training and serving algorithms, system designs, and implementations. The goal is to characterize current approaches to these resource challenges and potentially inspire future breakthroughs.

Low Difficulty Summary (written by GrooveSquid.com, original content)
Large models like language models and vision transformers are changing machine learning. They’re very good at many things, but they need a lot of computer power to work well. This makes them use up a lot of energy, which isn’t great for the planet. To make large models more sustainable, researchers are working on ways to make them use less energy. This report looks at how people are trying to solve this problem, and what ideas might help.
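
To make the scale of these resource demands concrete, here is a back-of-the-envelope sketch (our illustration, not taken from the paper): the memory needed just to hold a model's weights grows linearly with parameter count and numeric precision, which is one reason techniques such as weight quantization feature prominently in resource-efficiency research.

```python
# Illustrative sketch (not from the paper): weight-memory footprint of a
# hypothetical 7-billion-parameter model at common numeric precisions.
PARAM_COUNT = 7e9  # assumed parameter count, roughly a "7B" model

for precision, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("int4", 4)]:
    gib = PARAM_COUNT * bits / 8 / 2**30  # bits -> bytes -> GiB
    print(f"{precision}: {gib:5.1f} GiB of weights")

# Output: fp32 ~26.1 GiB, fp16 ~13.0 GiB, int8 ~6.5 GiB, int4 ~3.3 GiB.
# Halving the precision halves the memory needed, before even counting
# activations or optimizer state during training.
```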

Keywords

* Artificial intelligence
* Machine learning