
Summary of Energy-Latency Manipulation of Multi-modal Large Language Models via Verbose Samples, by Kuofeng Gao et al.


Energy-Latency Manipulation of Multi-modal Large Language Models via Verbose Samples

by Kuofeng Gao, Jindong Gu, Yang Bai, Shu-Tao Xia, Philip Torr, Wei Liu, Zhifeng Li

First submitted to arxiv on: 25 Apr 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original content by GrooveSquid.com)
The paper investigates the vulnerability of multi-modal large language models (MLLMs) to malicious users who induce high energy consumption and latency during inference. The researchers find that the energy-latency cost can be manipulated by maximizing the length of the generated sequence, which motivates them to propose verbose samples: crafted verbose images and videos. To achieve this, they design non-specific losses shared by image-based and video-based models, as well as modality-specific losses that increase complexity and promote diverse hidden states or frame features. A temporal weight adjustment algorithm balances these losses during optimization. Experimental results show that the proposed approach can significantly extend the length of generated sequences.
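The temporal weight adjustment can be pictured as a schedule that shifts emphasis between the non-specific (length) loss and the modality-specific (diversity) loss over the course of optimization. The sketch below is a minimal, hypothetical illustration with a simple linear schedule; the paper's actual algorithm and loss definitions are not reproduced here.

```python
# Hypothetical sketch of balancing two attack losses with a temporal weight.
# The linear schedule and the loss names are illustrative assumptions,
# not the paper's exact formulation.

def temporal_weight(step: int, total_steps: int) -> float:
    """Weight that moves linearly from 0 to 1 as optimization proceeds."""
    return step / total_steps

def combined_loss(length_loss: float, diversity_loss: float,
                  step: int, total_steps: int) -> float:
    """Blend the sequence-length loss and the diversity loss.

    Early steps emphasize the length loss; later steps shift weight
    toward the diversity loss.
    """
    w = temporal_weight(step, total_steps)
    return (1.0 - w) * length_loss + w * diversity_loss

# Early in optimization the length loss dominates:
print(combined_loss(1.0, 0.5, step=0, total_steps=100))    # 1.0
# Late in optimization the diversity loss dominates:
print(combined_loss(1.0, 0.5, step=100, total_steps=100))  # 0.5
```

In an actual attack loop, this combined loss would drive gradient updates on the input image or video frames, so that the perturbed input keeps the model generating long, diverse output.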
Low Difficulty Summary (original content by GrooveSquid.com)
The paper looks at how multi-modal language models can be tricked into using up lots of energy and taking a long time to respond. The researchers show this can be done on purpose: they craft special images and videos that push these models to keep generating longer and longer responses. They design losses for image-based and video-based models that make the outputs more complex and diverse, so the models keep going instead of stopping. Overall, the goal is to expose this weakness so that such models can be made more robust against it.

Keywords

  • Artificial intelligence
  • Inference
  • Multi modal