
Summary of Teaching Models to Balance Resisting and Accepting Persuasion, by Elias Stengel-Eskin et al.


Teaching Models to Balance Resisting and Accepting Persuasion

by Elias Stengel-Eskin, Peter Hase, Mohit Bansal

First submitted to arXiv on: 18 Oct 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper proposes Persuasion-Balanced Training (PBT), a new approach that defends large language models against harmful persuasion while also enabling them to accept beneficial persuasion. The authors show that optimizing models for only one side of persuasion leads to poor performance on the other. PBT uses multi-agent recursive dialogue trees to generate training data, then trains models via preference optimization to accept persuasion only when appropriate. Because the data comes from dialogues between smaller models, the approach also makes it possible to train larger models on it. PBT improves resistance to misinformation, resilience to challenges, and overall performance on holistic data containing both positive and negative persuasion. It also leads to better team performance in multi-agent debates across two domains.

Low Difficulty Summary (original content by GrooveSquid.com)
Large language models are vulnerable to being persuaded or swayed by others. Researchers have developed a new approach called Persuasion-Balanced Training (PBT) that helps defend these models against negative persuasion while also allowing them to improve their answers through positive persuasion. PBT uses special training data and algorithms to teach models when it is okay to be convinced by someone else. This can help prevent the spread of misinformation and make AI more trustworthy.

Keywords

» Artificial intelligence  » Optimization