Modeling arousal potential of epistemic emotions using Bayesian information gain: Inquiry cycle driven by free energy fluctuations

by Hideyoshi Yanagisawa, Shimon Honda

First submitted to arXiv on: 14 Dec 2023

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Information Theory (cs.IT); Neurons and Cognition (q-bio.NC); Applications (stat.AP)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (GrooveSquid.com original content)
The paper proposes a novel formulation of epistemic emotions like curiosity and interest using information gain generated by the free energy minimization principle. It introduces two types of information gain: Kullback-Leibler divergence (KLD) representing recognition-based free energy reduction, and Bayesian surprise (BS) representing expected information gain from prior updates. The authors show that KLD and BS form an upward-convex function similar to Berlyne’s arousal potential functions or the Wundt curve. They suggest that this framework unifies the free energy principle with arousal potential theory, explaining the Wundt curve as an information gain function and proposing an ideal inquiry process driven by epistemic emotions.
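To make the two information-gain quantities concrete, here is a minimal sketch (not the authors' code) of Bayesian surprise for a conjugate Gaussian belief update: the prior is updated by an observation, and Bayesian surprise is the Kullback-Leibler divergence from the posterior to the prior. The function names, the Gaussian assumption, and the parameter choices are illustrative assumptions, not the paper's exact formulation.

```python
import math

def kl_gauss(mu1, var1, mu2, var2):
    # KL( N(mu1, var1) || N(mu2, var2) ) for univariate Gaussians
    return 0.5 * math.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / (2 * var2) - 0.5

def bayes_update(mu0, var0, x, var_lik):
    # Conjugate Gaussian update: precisions (inverse variances) add,
    # and the posterior mean is a precision-weighted average.
    prec_post = 1 / var0 + 1 / var_lik
    var_post = 1 / prec_post
    mu_post = var_post * (mu0 / var0 + x / var_lik)
    return mu_post, var_post

def bayesian_surprise(mu0, var0, x, var_lik):
    # Expected information gain from updating the prior on observing x
    mu_post, var_post = bayes_update(mu0, var0, x, var_lik)
    return kl_gauss(mu_post, var_post, mu0, var0)
```

Under this sketch, Bayesian surprise grows with the prediction error |x - mu0|: a more novel observation shifts the prior more, yielding a larger KL divergence. This illustrates the novelty axis along which, per the paper, the combined information-gain quantities trace an upward-convex (Wundt-like) curve.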
Low Difficulty Summary (GrooveSquid.com original content)
The paper explores how epistemic emotions, like curiosity and interest, make us want to learn more. It proposes a new way to measure these emotions using math formulas that combine two types of "gain" from learning: how much effort it takes to make sense of something new (Kullback-Leibler divergence), and how much the new information changes what we believe (Bayesian surprise). The authors show that together these gains form a hill-shaped curve, and they argue that this curve helps explain why we get most excited to learn when things are new but not overwhelming.

Keywords

» Artificial intelligence