Summary of Grounding Language in Multi-Perspective Referential Communication, by Zineng Tang et al.
Grounding Language in Multi-Perspective Referential Communication
by Zineng Tang, Lingjun Mao, Alane Suhr
First submitted to arXiv on: 4 Oct 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
| --- | --- | --- |
| High | Paper authors | The paper's original abstract. |
| Medium | GrooveSquid.com (original content) | This paper introduces a novel task and dataset for referring expression generation and comprehension in multi-agent embodied environments, where two agents must consider each other's visual perspectives to generate and understand references to objects and their spatial relations. The dataset consists of 2,970 human-written referring expressions paired with human comprehension judgments. Automated models are evaluated as both speakers and listeners paired with human partners, revealing that they lag behind human pairs in both reference generation and comprehension. To improve model performance, an open-weight speaker model is trained with evidence of communicative success when paired with a listener, yielding a significant improvement in communicative success from 58.9% to 69.3%. |
| Low | GrooveSquid.com (original content) | This paper creates a new way for computers to understand and generate sentences about objects they can see, taking into account what another computer might be able to see too. The challenge is called "referring expression generation," and it is like trying to describe a picture to someone who may not have the same view as you. Researchers collected a large dataset of human-written descriptions and had computer models generate their own, finding that the models do not do as well as humans. To make the models better at this task, the researchers trained one of them to learn from feedback about whether its communication with a partner succeeded. |