CEGI: Measuring the trade-off between efficiency and carbon emissions for SLMs and VLMs

by Abhas Kumar, Kapil Pathak, Rajesh Kavuru, Prabhakar Srinivasan

First submitted to arXiv on: 3 Dec 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Computers and Society (cs.CY); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper analyzes Small Language Models (SLMs) and Vision Language Models (VLMs) to evaluate the trade-off between model performance and carbon emissions across four essential tasks: Image Captioning, Visual Question Answering (VQA), Dialogue Summarization, and Text-to-SQL conversion. Various SLMs and VLMs from the Qwen and LLaMA architecture families are chosen, and their variants based on model size, quantization level, and fine-tuning parameters are evaluated. The paper introduces a novel metric called CEGI (Carbon Efficient Gain Index) to quantify the trade-off between model performance and carbon emissions. The study demonstrates that fine-tuning SLMs and VLMs can achieve performance levels comparable to Large Language Models (LLMs) while producing significantly less carbon emissions. The findings suggest that the marginal gains in accuracy from larger models do not justify the substantial increase in carbon emissions.
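The summary does not give the exact CEGI formula, but the idea of a "gain per unit carbon" index can be sketched in a few lines. The definition below (accuracy gained over a smaller baseline model, divided by the extra CO2 the larger model emits) is an illustrative assumption for intuition only, not the authors' actual metric; the function name and all numbers are hypothetical.

```python
def carbon_efficient_gain_index(acc_model: float, acc_baseline: float,
                                co2_model: float, co2_baseline: float) -> float:
    """Hypothetical CEGI-style ratio: accuracy gained per extra kg of CO2
    emitted, relative to a smaller baseline model (illustrative only)."""
    delta_acc = acc_model - acc_baseline
    delta_co2 = co2_model - co2_baseline
    if delta_co2 <= 0:
        raise ValueError("the candidate model is assumed to emit more CO2 than the baseline")
    return delta_acc / delta_co2

# Made-up example: a large LLM gains 2 accuracy points over a fine-tuned SLM
# but emits 50 kg more CO2 across training and inference.
ratio = carbon_efficient_gain_index(acc_model=0.84, acc_baseline=0.82,
                                    co2_model=60.0, co2_baseline=10.0)
print(f"accuracy gain per kg CO2: {ratio:.5f}")
```

Under a ratio like this, the paper's conclusion reads naturally: when the accuracy gain of a much larger model is marginal, the index shrinks toward zero, so the extra emissions are not justified.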
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper looks at how well Small Language Models (SLMs) and Vision Language Models (VLMs) work and how they affect the environment. It tests different versions of these models on four important tasks, like describing pictures or answering questions about what's in a picture. The study introduces a new way to measure how well a model works compared to how much it harms the environment. The results show that fine-tuning SLMs and VLMs can help them work just as well as much bigger models while being kinder to the planet.

Keywords

» Artificial intelligence  » Fine tuning  » Image captioning  » Llama  » Quantization  » Question answering  » Summarization