Reinforcement Learning for Infinite-Horizon Average-Reward Linear MDPs via Approximation by Discounted-Reward MDPs

by Kihyuk Hong, Woojin Chae, Yufan Zhang, Dabeen Lee, Ambuj Tewari

First submitted to arXiv on: 23 May 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The researchers tackle infinite-horizon average-reward reinforcement learning with linear Markov decision processes (MDPs), a setting where algorithm design is difficult because the Bellman operator is not a contraction. Previous approaches either suffer from computational inefficiency or require strong assumptions on the dynamics to achieve a regret bound of Õ(√T). This paper proposes the first algorithm that achieves Õ(√T) regret with polynomial computational complexity and without strong assumptions on the dynamics. The approach approximates the average-reward setting by a discounted-reward MDP and applies optimistic value iteration: the algorithm plans a nonstationary policy and follows it until a specified information metric doubles, at which point it replans. The paper additionally introduces a value function clipping procedure for sample efficiency.
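
To make the planning loop concrete, here is a minimal Python sketch of the pattern the summary describes, under stated assumptions: a toy tabular MDP with one-hot features standing in for a linear MDP, hypothetical names (plan_ovi, beta, V_MAX), and a stationary greedy policy in place of the paper's nonstationary one. This is not the authors' implementation; it only illustrates the three ingredients mentioned above: optimistic value iteration on a discounted surrogate (discount close to 1), value clipping, and lazy replanning whenever the determinant of the feature Gram matrix, one standard information metric, doubles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tabular MDP viewed as a linear MDP with one-hot features phi(s, a).
# All names and constants below are illustrative assumptions, not the paper's code.
nS, nA = 5, 2
gamma = 0.99                    # discount close to 1 approximates average reward
d = nS * nA                     # feature dimension
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # true dynamics, unknown to learner
R = rng.uniform(size=(nS, nA))                 # true rewards, unknown to learner
V_MAX = 1.0 / (1.0 - gamma)                    # clipping threshold

def phi(s, a):
    """One-hot feature map: makes the toy MDP a (trivial) linear MDP."""
    f = np.zeros(d)
    f[s * nA + a] = 1.0
    return f

def plan_ovi(Lam, data, beta=0.5, n_iter=50):
    """Optimistic least-squares value iteration on the discounted surrogate:
    ridge-regress Bellman targets on features, add an elliptical exploration
    bonus, and clip values to [0, V_MAX] (the clipping step)."""
    Lam_inv = np.linalg.inv(Lam)
    Q = np.zeros((nS, nA))
    for _ in range(n_iter):
        V = np.clip(Q.max(axis=1), 0.0, V_MAX)       # value clipping
        b = np.zeros(d)
        for (s, a, r, s2) in data:                   # targets: r + gamma * V(s')
            b += phi(s, a) * (r + gamma * V[s2])
        w = Lam_inv @ b                              # ridge-regression weights
        for s in range(nS):
            for a in range(nA):
                f = phi(s, a)
                bonus = beta * np.sqrt(f @ Lam_inv @ f)  # optimism bonus
                Q[s, a] = min(f @ w + bonus, V_MAX)
    return Q.argmax(axis=1)

# Control loop: replan only when the determinant of the information (Gram)
# matrix doubles, i.e. only when enough new information has been gathered.
Lam, data = np.eye(d), []
det_at_plan = 0.0
s, total, T = 0, 0.0, 2000
for t in range(T):
    if np.linalg.det(Lam) >= 2.0 * det_at_plan:
        pi = plan_ovi(Lam, data)                     # replan on doubling
        det_at_plan = np.linalg.det(Lam)
    a = pi[s]
    s2 = int(rng.choice(nS, p=P[s, a]))
    data.append((s, a, R[s, a], s2))
    Lam += np.outer(phi(s, a), phi(s, a))            # rank-one information update
    total += R[s, a]
    s = s2

print("empirical average reward:", total / T)
```

The doubling condition is what keeps the sketch cheap: by a standard determinant-doubling argument, the matrix determinant can double only about O(d log T) times, so planning runs rarely while the agent still reacts whenever it has gathered substantially new information.
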
Low Difficulty Summary (written by GrooveSquid.com, original content)
The researchers have found a way to make machines learn from experience without getting stuck in an infinite loop. They’ve developed an algorithm that can learn quickly and efficiently, even when the rules of the game change. The algorithm uses something called optimistic value iteration, which helps it plan ahead and adapt to new situations. This could be useful for things like self-driving cars or robots that need to learn from their mistakes.

Keywords

» Artificial intelligence  » Reinforcement learning