A Fairness-Driven Method for Learning Human-Compatible Negotiation Strategies
by Ryan Shea, Zhou Yu
First submitted to arXiv on: 26 Sep 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Despite recent progress in AI and NLP, negotiation remains a challenging domain for AI agents. Traditional game-theoretic approaches struggle to produce human-compatible strategies because they cannot learn from human data. Conversely, methods that rely solely on human data are often domain-specific and lack the theoretical guarantees of game-theory-based strategies. To address this, the authors propose the Fairness-Driven Human-Compatible (FDHC) negotiation framework, which integrates fairness into both reward design and search to learn human-compatible negotiation strategies. The method incorporates a novel RL+search technique called LGM-Zero, which uses a pre-trained language model to retrieve human-compatible offers from large action spaces. The authors show that this approach achieves more egalitarian negotiation outcomes and improves negotiation quality. |
Low | GrooveSquid.com (original content) | This paper talks about how AI agents are not very good at negotiating with humans. Right now, there are two main ways to make AI agents negotiate: using game theory or using human data. Both have problems. Game theory is too rigid and doesn’t account for what humans actually do, while human data only works in specific situations. So the authors came up with a new approach that combines fairness and search to teach AI agents to negotiate more like humans. Their method uses a technique called LGM-Zero, which relies on a language model to help the AI pick good offers from a huge number of possibilities. They found that this approach leads to fairer and better negotiations. |
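The summaries say FDHC folds fairness into the reward signal but do not spell out the exact formula. As an illustration only, one common way to encode an egalitarian objective is to penalize the gap between the two parties' utilities; the function name, the linear-gap penalty, and the weight below are all hypothetical, not taken from the paper.

```python
def fairness_shaped_reward(own_utility: float,
                           partner_utility: float,
                           fairness_weight: float = 0.5) -> float:
    """Hypothetical egalitarian reward shaping (not the paper's formula).

    The agent earns its own utility minus a penalty proportional to the
    utility gap between the two negotiators, so a balanced deal is worth
    more than a lopsided one of equal personal value.
    """
    gap = abs(own_utility - partner_utility)
    return own_utility - fairness_weight * gap

# A balanced deal outscores a lopsided one with the same personal payoff:
balanced = fairness_shaped_reward(6.0, 6.0)   # 6.0 - 0.5 * 0.0 = 6.0
lopsided = fairness_shaped_reward(6.0, 2.0)   # 6.0 - 0.5 * 4.0 = 4.0
```

Under this kind of shaping, an RL agent trained to maximize reward is nudged toward offers that leave both sides reasonably well off, which is the intuition behind the "more egalitarian outcomes" the summaries describe.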
Keywords
» Artificial intelligence » Language model » Machine learning » NLP