
Summary of Evaluating and Enhancing LLMs Agent based on Theory of Mind in Guandan: A Multi-Player Cooperative Game under Imperfect Information, by Yauwai Yim et al.


Evaluating and Enhancing LLMs Agent based on Theory of Mind in Guandan: A Multi-Player Cooperative Game under Imperfect Information

by Yauwai Yim, Chunkit Chan, Tianyu Shi, Zheye Deng, Wei Fan, Tianshi Zheng, Yangqiu Song

First submitted to arXiv on: 5 Aug 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.
Medium Difficulty Summary (original content by GrooveSquid.com)
The paper explores the potential of large language models (LLMs) to collaborate with allies against other agents in complex, imperfect-information environments, specifically in non-English settings. By comparing LLMs' performance with established baselines built on other types of agents, the study investigates how well the knowledge acquired by open-source and API-based LLMs transfers to a sophisticated text-based game that requires agent collaboration under imperfect information. The proposed Theory of Mind (ToM) planning technique lets LLM agents adapt their strategy to various adversaries using only the game rules, the current state, and the historical context as input. An external tool is incorporated to mitigate the challenge of the dynamic and extensive action space of this card game (a minimal illustrative sketch of this setup follows the summaries below). The results show that, although a performance gap remains between current LLMs and state-of-the-art reinforcement learning (RL) models, LLMs demonstrate ToM capabilities in this game setting: they consistently improve their performance against opposing agents, suggesting that they understand the actions of allies and adversaries and can establish collaboration with allies.
Low Difficulty Summary (original content by GrooveSquid.com)
This study looks at how large language models can work together with other agents to achieve a common goal. It’s like trying to figure out what someone else is thinking or planning, which is called “theory of mind”. The researchers used a card game to test their idea and found that the LLMs were able to understand what the other players were doing and work together to win. This is important because it could help us create more intelligent computers in the future.
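To make the setup described in the medium-difficulty summary concrete, below is a minimal, hypothetical Python sketch of such a ToM planning loop. It is not the authors' implementation: the GameView, legal_actions, and query_llm names are illustrative stand-ins, with legal_actions playing the role of the external tool that enumerates valid moves and query_llm standing in for any open-source or API-based LLM.

# Illustrative sketch (not the paper's code) of a Theory-of-Mind planning loop
# for an LLM card-game agent: the agent sees only the game rules, its own hand,
# and the public action history, and an external tool supplies the legal moves.

from dataclasses import dataclass, field


@dataclass
class GameView:
    rules: str                                          # static rules of Guandan, given once
    hand: list[str]                                     # the agent's own cards (others' hands are hidden)
    history: list[str] = field(default_factory=list)    # public action history so far


def legal_actions(view: GameView) -> list[str]:
    """Stand-in for the external tool that enumerates valid moves,
    keeping the otherwise large, dynamic action space manageable for the LLM."""
    return ["pass"] + [f"play {card}" for card in view.hand]


def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with an open-source or API-based model."""
    return "pass"  # placeholder so the sketch runs end to end


def tom_plan(view: GameView) -> str:
    """Two-step ToM-style prompting: first infer what allies and adversaries
    are likely holding or intending, then choose among the legal moves."""
    belief_prompt = (
        f"Rules:\n{view.rules}\n\nAction history:\n{view.history}\n\n"
        "Infer what your ally and your opponents are likely trying to do next."
    )
    beliefs = query_llm(belief_prompt)

    action_prompt = (
        f"Rules:\n{view.rules}\n\nYour hand: {view.hand}\n"
        f"Action history: {view.history}\n"
        f"Your beliefs about the other players: {beliefs}\n"
        f"Legal moves (from the external tool): {legal_actions(view)}\n"
        "Pick exactly one legal move that best helps your team."
    )
    return query_llm(action_prompt)


if __name__ == "__main__":
    view = GameView(rules="Guandan rules (abridged)...", hand=["5H", "5S", "KD"])
    print(tom_plan(view))

In the evaluation described by the paper, the chosen move would additionally be checked against the tool's legal-move list before being played, and the loop repeated against different opposing agents.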

Keywords

» Artificial intelligence  » Reinforcement learning