Summary of An Idiosyncrasy of Time-discretization in Reinforcement Learning, by Kris De Asis et al.
An Idiosyncrasy of Time-discretization in Reinforcement Learning
by Kris De Asis, Richard S. Sutton
First submitted to arXiv on: 21 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Many reinforcement learning algorithms assume discrete time steps, but physical systems are continuous in time. Digital control requires choosing a time-discretization granularity, and the environment state advances before decisions are made. The relationship between continuous-time and discrete-time returns is crucial when discretizing environments. The authors highlight an idiosyncrasy in applying naive discrete-time algorithms to discretized environments and propose a simple modification to align the return definitions. This matters for physical systems where time-discretization is a choice or inherently stochastic. |
| Low | GrooveSquid.com (original content) | Reinforcement learning helps computers learn from experience. In the real world, things happen continuously, like the movement of robots or cars. But most algorithms assume time is divided into small steps. The paper studies how the choice of step size affects learning. It finds that when using existing algorithms on discretized environments, the way rewards are accumulated must be adjusted to get good results. This is important for controlling physical systems where you can choose the step size or it's random. |
Keywords
* Artificial intelligence
* Reinforcement learning