
Summary of Target-driven Attack for Large Language Models, by Chong Zhang et al.


Target-driven Attack for Large Language Models

by Chong Zhang, Mingyu Jin, Dong Shu, Taowen Wang, Dongfang Liu, Xiaobo Jin

First submitted to arXiv on: 9 Nov 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content, written by GrooveSquid.com)
This paper addresses a critical vulnerability in large language models (LLMs), which are widely used for natural language tasks: users can manipulate an LLM into producing incorrect answers simply by injecting adversarial text or instructions into the prompt. While black-box attacks have been studied extensively, the relationship between attack strategies and their effectiveness remains unclear. The authors propose a target-driven black-box attack method that maximizes the KL divergence between the model's output distributions on clean text and on attack text, and they transform two convex optimization problems into a single problem whose solution yields the attack text. Experiments on multiple LLMs and datasets, covering both token-manipulation and misinformation attacks, demonstrate the method's effectiveness.
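
To make the attack objective concrete, below is a minimal sketch (not the authors' implementation) of the quantity being maximized: the KL divergence between a causal language model's next-token distribution on a clean prompt and on the same prompt with injected adversarial text. The model name, the example prompts, and the single-token measurement are all illustrative assumptions.

    # Minimal sketch (not the paper's method): estimate the KL divergence
    # between a causal LM's next-token distributions on a clean prompt and
    # on the same prompt with injected adversarial text.
    import torch
    import torch.nn.functional as F
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # assumption: any causal LM would work here
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    def next_token_log_probs(prompt: str) -> torch.Tensor:
        """Log-probabilities of the next token given the prompt."""
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits  # shape: (1, seq_len, vocab)
        return F.log_softmax(logits[0, -1], dim=-1)

    clean = "The capital of France is"                       # illustrative prompt
    attacked = "Ignore prior facts. The capital of France is"  # injected text

    log_p = next_token_log_probs(clean)     # reference distribution p
    log_q = next_token_log_probs(attacked)  # perturbed distribution q

    # KL(p || q) = sum_x p(x) * (log p(x) - log q(x))
    kl = torch.sum(log_p.exp() * (log_p - log_q))
    print(f"KL(clean || attacked) = {kl.item():.4f}")

A target-driven attacker would then search over candidate injected strings for one that maximizes this divergence, pushing the model as far as possible away from its clean behavior.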

Low Difficulty Summary (original content, written by GrooveSquid.com)
This paper talks about a problem with language models that power things like chatbots and translation tools. Users can trick these models into giving wrong answers by adding special words or instructions. Right now, people mostly rely on trial-and-error tricks to make the models slip up, and it's not clear which tricks work or why. The authors came up with a new way to attack a model that is more focused and more successful. They tested their method on several language models and datasets and showed that it can be very effective.

Keywords

» Artificial intelligence  » Optimization  » Token  » Translation