Summary of "Limits of Theory of Mind Modelling in Dialogue-Based Collaborative Plan Acquisition" by Matteo Bortoletto et al.
Limits of Theory of Mind Modelling in Dialogue-Based Collaborative Plan Acquisition
by Matteo Bortoletto, Constantin Ruhdorfer, Adnen Abdessaied, Lei Shi, Andreas Bulling
First submitted to arXiv on: 21 May 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper’s original abstract (see the arXiv listing above). |
| Medium | GrooveSquid.com (original content) | Recent research on dialogue-based collaborative plan acquisition (CPA) has explored the potential benefits of Theory of Mind (ToM) modelling in settings with asymmetric skill sets and knowledge. However, the actual impact of ToM on this novel task remains under-explored. This paper sheds light on the question by representing plans as graphs and exploiting task-specific constraints. The results show that when predicting one’s own missing knowledge, performance nearly doubles, but the improvements attributable to ToM modelling diminish. This phenomenon persists even when evaluating existing baseline methods. A principled comparison of models with and without ToM features reveals that learned ToM features are more likely to reflect latent patterns in the data with no discernible link to ToM. These findings call for a deeper understanding of the role of ToM in CPA and beyond, as well as for new methods for modelling and evaluating mental states in computational collaborative agents. |
| Low | GrooveSquid.com (original content) | This paper looks at how computers can work together to make plans. A few years ago, researchers suggested that knowing what others are thinking (a concept called Theory of Mind) could help with this task, but they didn’t really study it much. This paper takes a closer look and finds that when computers predict their own missing knowledge, performance gets much better, yet having Theory of Mind doesn’t make a big difference. The authors also compare different models to see how well they do with or without Theory of Mind. The results show that knowing what others are thinking isn’t as important as everyone thought. |