Summary of Pragmatic Instruction Following and Goal Assistance via Cooperative Language-Guided Inverse Planning, by Tan Zhi-Xuan et al.


Pragmatic Instruction Following and Goal Assistance via Cooperative Language-Guided Inverse Planning

by Tan Zhi-Xuan, Lance Ying, Vikash Mansinghka, Joshua B. Tenenbaum

First submitted to arXiv on: 27 Feb 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Computation and Language (cs.CL); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper introduces a new architecture for assistive agents that can follow ambiguous instructions in a flexible, context-sensitive manner. The proposed model, cooperative language-guided inverse plan search (CLIPS), uses Bayesian inference to model human planners who communicate joint plans with the assistant. This allows the agent to evaluate the likelihood of an instruction given a hypothesized plan, using large language models (LLMs) as likelihood evaluators. The agent then acts to minimize the expected cost of achieving the goal, which lets it pragmatically follow ambiguous instructions and provide effective assistance; a minimal code sketch of this inference loop appears after the summaries below. The authors evaluate CLIPS in two cooperative planning domains, finding that it outperforms GPT-4V, LLM-based literal instruction following, and unimodal inverse planning in both accuracy and helpfulness.

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper presents a new way for computers to understand human instructions that are unclear without more context. The computer model is designed to plan and follow instructions together with a person. It uses big language models to figure out what an instruction means, based on what the person does and says. This helps the computer provide useful assistance even when it is unsure about the goal. The authors tested the system in two scenarios and found that it performed better than other methods.

Keywords

  • Artificial intelligence
  • Bayesian inference
  • GPT
  • Likelihood