

MANGO: A Benchmark for Evaluating Mapping and Navigation Abilities of Large Language Models

by Peng Ding, Jiading Fang, Peng Li, Kangrui Wang, Xiaochen Zhou, Mo Yu, Jing Li, Matthew R. Walter, Hongyuan Mei

First submitted to arXiv on: 29 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Robotics (cs.RO)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes MANGO, a benchmark for evaluating the ability of large language models to perform text-based mapping and navigation. The benchmark consists of 53 mazes taken from text games, each paired with a walkthrough that does not cover all possible paths. A large language model reads the walkthrough and then answers hundreds of mapping and navigation questions, such as “How should you go to Attic from West of House?” and “Where are we if we go north and east from Cellar?”. The results show that even GPT-4, the best language model to date, performs poorly on these questions. The paper further argues that strong mapping and navigation abilities would benefit large language models on related downstream tasks, such as playing text games. The MANGO benchmark is intended to facilitate future research on methods that improve the mapping and navigation capabilities of language models. (A minimal sketch of how such a query might be posed to a model appears after the summaries.)

Low Difficulty Summary (written by GrooveSquid.com, original content)
A new way to test how good big language models are at understanding maps and giving directions has been created. It’s called MANGO, and it uses 53 mazes from text games to see whether these powerful AI systems can figure out where you need to go and how to get there. Even the best one, GPT-4, does pretty poorly when asked simple questions like “How do I get to Attic from West of House?” or “Where am I if I go north and east from Cellar?”. The researchers think that being good at understanding maps would also help these language models play text games better. They’re making the MANGO benchmark available so others can use it to improve their own AI systems.
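The summaries above describe the benchmark’s query format: a model reads a maze walkthrough and then answers mapping and navigation questions about it. Below is a minimal sketch of how such a query could be assembled and sent to a model. The prompt wording, the walkthrough excerpt, and the query_llm helper are illustrative assumptions, not the authors’ actual evaluation code.

```python
# Minimal sketch of a MANGO-style query, under the assumptions stated above.

def build_prompt(walkthrough: str, question: str) -> str:
    """Combine a maze walkthrough with a mapping or navigation question."""
    return (
        "You have read the following walkthrough of a text game:\n\n"
        f"{walkthrough}\n\n"
        f"Question: {question}\n"
        "Answer with the sequence of moves or the destination location."
    )

def query_llm(prompt: str) -> str:
    """Stand-in for a call to a large language model (e.g., GPT-4)."""
    # A real evaluation would send the prompt to an LLM API here.
    return "<model answer would appear here>"

if __name__ == "__main__":
    # Hypothetical walkthrough excerpt; real MANGO walkthroughs are much longer
    # and deliberately do not cover every path in the maze.
    walkthrough = "You are West of House. > go north ... You are in the Attic."
    # One navigation question (find a route) and one mapping question
    # (infer a location), mirroring the examples quoted in the summaries above.
    for question in [
        "How should you go to Attic from West of House?",
        "Where are we if we go north and east from Cellar?",
    ]:
        print(query_llm(build_prompt(walkthrough, question)))
```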

Keywords

» Artificial intelligence  » GPT  » Language model  » Large language model