Intrinsic Rewards for Exploration without Harm from Observational Noise: A Simulation Study Based on the Free Energy Principle

by Theodore Jerome Tinker, Kenji Doya, Jun Tani

First submitted to arXiv on: 13 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Machine Learning (stat.ML)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract of the paper on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This Reinforcement Learning (RL) paper aims to improve exploration efficiency with two intrinsic rewards: the entropy of the action policy and curiosity for information gain. The authors build on established entropy-based methods that promote randomized action selection and propose a novel approach, hidden state curiosity, which rewards agents based on the KL divergence between the predictive prior and posterior probabilities of latent variables. This is contrasted with prediction error curiosity, which can be distracted by unpredictable observational noise; such distractions are known as curiosity traps. The study trains six types of agents to navigate mazes: baseline agents with no reward for entropy or curiosity, and agents rewarded for entropy and/or either prediction error curiosity or hidden state curiosity. Results show that entropy and curiosity lead to efficient exploration, especially when both are employed together, and that hidden state curiosity is resilient against curiosity traps.
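
To make the contrast concrete, here is a minimal sketch (Python with NumPy) of how the two curiosity signals might be computed for a world model with diagonal Gaussian latent variables. The function names and the Gaussian parameterization are illustrative assumptions, not the authors' implementation; the paper specifies only that hidden state curiosity is the KL divergence between the predictive prior and posterior over latent variables.

    import numpy as np

    def prediction_error_curiosity(obs, predicted_obs):
        # Reward = squared error between the observation and the world
        # model's prediction. Irreducible observational noise keeps this
        # error high forever, which is what creates a "curiosity trap".
        return float(np.sum((obs - predicted_obs) ** 2))

    def hidden_state_curiosity(prior_mu, prior_sigma, post_mu, post_sigma):
        # Reward = KL(posterior || prior) for diagonal Gaussians over the
        # latent (hidden) state, computed in closed form. Noise that the
        # latent state cannot capture barely shifts the posterior, so the
        # divergence (and hence the reward) stays small.
        var_post, var_prior = post_sigma ** 2, prior_sigma ** 2
        kl_terms = (np.log(var_prior / var_post)
                    + (var_post + (post_mu - prior_mu) ** 2) / var_prior
                    - 1.0)
        return float(0.5 * np.sum(kl_terms))

Because pure observational noise inflates prediction error but leaves the latent distributions largely unchanged, the KL-based reward fades on noise where the error-based reward does not.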

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper is about teaching machines to make good decisions by exploring their environment and learning from mistakes. Right now, these machines can get stuck chasing things that look interesting but are really just random noise. The authors propose a new way for the machines to learn by seeking out what they don't know yet while ignoring noise they can never predict. This helps them avoid getting stuck and makes them better at solving problems.

Keywords

  • Artificial intelligence
  • Reinforcement learning