Implicit Bias of Policy Gradient in Linear Quadratic Control: Extrapolation to Unseen Initial States

by Noam Razin, Yotam Alexander, Edo Cohen-Karlik, Raja Giryes, Amir Globerson, Nadav Cohen

First submitted to arXiv on: 12 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Systems and Control (eess.SY); Machine Learning (stat.ML)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
In supervised learning, gradient descent is known to exhibit an implicit bias that leads to good performance on unseen data, but the implications of this phenomenon for optimal control (reinforcement learning) were unclear. This paper investigates the extent to which controllers learned via policy gradient extrapolate to initial states not seen during training, focusing on the fundamental Linear Quadratic Regulator (LQR) problem. The authors show that the degree of extrapolation depends on how much the system is explored from the initial states used in training. Experiments corroborate these findings and demonstrate that they carry over to non-linear systems with neural network controllers.

Low Difficulty Summary (written by GrooveSquid.com; original content)
A team of researchers found something interesting about how machine learning models work. They looked at a method called “policy gradient” that helps make decisions in complex situations. The scientists discovered that controllers trained this way have a hidden ability to do well even in situations they haven’t seen before. This is important because it means we can use these models to make better choices in new and unexpected situations.

Keywords

  • Artificial intelligence
  • Gradient descent
  • Machine learning
  • Neural network
  • Reinforcement learning
  • Supervised