
Summary of EXPIL: Explanatory Predicate Invention for Learning in Games, by Jingyuan Sha et al.


EXPIL: Explanatory Predicate Invention for Learning in Games

by Jingyuan Sha, Hikaru Shindo, Quentin Delfosse, Kristian Kersting, Devendra Singh Dhami

First submitted to arXiv on: 10 Jun 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes a novel approach called Explanatory Predicate Invention for Learning in Games (EXPIL) to improve explainability in reinforcement learning (RL). Current RL models are often black-boxes, making it difficult to understand the reasoning behind agent actions. Recent work has attempted to address this issue by using pretrained neural agents to encode logic-based policies, but these approaches require large amounts of predefined background knowledge. EXPIL identifies and extracts predicates from a pretrained neural agent, reducing the dependency on predefined background knowledge. The approach is evaluated on various games, demonstrating its effectiveness in achieving explainable behavior while requiring less background knowledge.
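To make the idea of predicate invention concrete, here is a minimal, hypothetical sketch of a logic-based game policy built from predicates. All names (`closeby`, `enemy_on_left`, `threat_on_left`, the toy `State`) are illustrative assumptions, not EXPIL's actual API; in the paper, the invented predicates are distilled from a pretrained neural agent rather than hand-written as they are here.

```python
# Hypothetical sketch: logic-based action selection via invented predicates.
# Predicate and action names are illustrative, not EXPIL's actual interface.

from dataclasses import dataclass

@dataclass
class State:
    agent_x: float
    enemy_x: float

# Base predicates evaluated on a symbolic game state.
def closeby(s: State) -> bool:
    return abs(s.agent_x - s.enemy_x) < 2.0

def enemy_on_left(s: State) -> bool:
    return s.enemy_x < s.agent_x

# An "invented" predicate: a composition of base predicates. In EXPIL such
# compositions are extracted from a pretrained neural agent, reducing the
# background knowledge that must be predefined.
def threat_on_left(s: State) -> bool:
    return closeby(s) and enemy_on_left(s)

# A logic policy: weighted rules mapping predicates to actions.
RULES = [
    (threat_on_left, "move_right", 1.0),
    (lambda s: closeby(s) and not enemy_on_left(s), "move_left", 1.0),
]

def act(s: State, default: str = "noop") -> str:
    # Fire every rule whose predicate holds and take the highest-weighted one.
    # Unlike a neural policy, each decision can be read off the rule that fired.
    fired = [(w, a) for pred, a, w in RULES if pred(s)]
    return max(fired)[1] if fired else default
```

The explainability claim rests on this structure: the chosen action traces back to a named predicate that held in the state, rather than to opaque network activations.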
Low Difficulty Summary (written by GrooveSquid.com, original content)
This research paper tries to make it easier to understand why artificial agents make certain decisions in games. Right now, these agents are like black boxes: we don’t know how they choose their moves. The researchers want to change that by creating a new way to figure out what an agent is "thinking." Their approach borrows knowledge from a pretrained neural agent to discover the concepts behind its decisions. It’s tested on different games and shown to produce decisions that people can understand.

Keywords

  • Artificial intelligence
  • Reinforcement learning