Summary of On the Stochastic (Variance-Reduced) Proximal Gradient Method for Regularized Expected Reward Optimization, by Ling Liang and Haizhao Yang


On the Stochastic (Variance-Reduced) Proximal Gradient Method for Regularized Expected Reward Optimization

by Ling Liang, Haizhao Yang

First submitted to arXiv on: 23 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Optimization and Control (math.OC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
The paper presents a novel approach to solving a regularized expected reward optimization problem in reinforcement learning (RL), a formulation that covers many existing problems as special cases. The authors analyze the classical stochastic proximal gradient method and show that it attains an O(ε^(-4)) sample complexity for reaching an ε-stationary point under standard conditions. To accelerate convergence, they also propose an efficient stochastic variance-reduced proximal gradient method that uses an importance-sampling-based probabilistic gradient estimator (PAGE). Their analysis shows that, under additional conditions, the sample complexity improves from O(ε^(-4)) to O(ε^(-3)). The proposed methods match the sample complexity of their most competitive counterparts for discounted Markov decision processes under similar settings.
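To make the two ingredients concrete, here is a minimal, hypothetical sketch of a stochastic proximal gradient loop with a PAGE-style variance-reduced estimator. It is not the paper's method: the regularized expected reward objective is replaced by a toy L1-regularized stochastic least-squares problem, and all names (sample_batch, prox_l1, the batch sizes, the switching probability p) are illustrative choices, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a regularized expected objective:
#   minimize  E[(x^T theta - y)^2]  +  lam * ||theta||_1
d = 5
theta_true = np.array([1.0, -2.0, 0.0, 0.0, 3.0])
lam = 0.01

def sample_batch(n):
    # Draw n fresh stochastic samples (the "environment")
    X = rng.normal(size=(n, d))
    y = X @ theta_true + 0.1 * rng.normal(size=n)
    return X, y

def grad(theta, X, y):
    # Mini-batch gradient of the smooth (least-squares) part
    return 2.0 * X.T @ (X @ theta - y) / len(y)

def prox_l1(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding),
    # which handles the nonsmooth regularizer
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def page_prox_gradient(steps=500, lr=0.05, p=0.1, big=256, small=8):
    theta = np.zeros(d)
    X, y = sample_batch(big)
    g = grad(theta, X, y)  # initial large-batch gradient estimate
    for _ in range(steps):
        # Proximal gradient step using the current estimate g
        theta_new = prox_l1(theta - lr * g, lr * lam)
        if rng.random() < p:
            # With probability p: refresh g on a large mini-batch
            X, y = sample_batch(big)
            g = grad(theta_new, X, y)
        else:
            # Otherwise: cheap PAGE-style correction on a small
            # mini-batch, reusing the previous estimate
            X, y = sample_batch(small)
            g = g + grad(theta_new, X, y) - grad(theta, X, y)
        theta = theta_new
    return theta

theta_hat = page_prox_gradient()
print(np.round(theta_hat, 2))
```

The key design point, which the sketch shares with PAGE, is that most iterations pay only for a small batch plus two gradient evaluations, while occasional large-batch refreshes keep the estimator's variance under control; that trade-off is what drives the improvement from O(ε^(-4)) to O(ε^(-3)) samples in the paper's analysis.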
Low Difficulty Summary (written by GrooveSquid.com; original content)
The paper solves a tricky problem in machine learning that helps make robots and computers smarter. It uses a special way of updating models called the proximal gradient method. This method is fast, but it sometimes gets stuck because its updates are noisy. To fix this, the authors add a technique that reduces the noise. They show that their new method works better than previous ones for certain types of problems. This matters because it helps us build more intelligent systems.

Keywords

* Artificial intelligence  * Machine learning  * Optimization  * Regularization  * Reinforcement learning