Summary of Double Duality: Variational Primal-Dual Policy Optimization for Constrained Reinforcement Learning, by Zihao Li et al.
Double Duality: Variational Primal-Dual Policy Optimization for Constrained Reinforcement Learning
by Zihao Li, Boyi Liu, Zhuoran Yang, Zhaoran Wang, Mengdi Wang
First submitted to arXiv on: 16 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Optimization and Control (math.OC); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper presents a model-based algorithm called Variational Primal-Dual Policy Optimization (VPDPO) for solving Constrained Convex Markov Decision Processes (MDPs), where the goal is to minimize a convex functional of the visitation measure subject to a convex constraint. VPDPO uses Lagrangian and Fenchel duality to reformulate the constrained problem as an unconstrained primal-dual optimization (a sketch of this reformulation appears below the table). It updates the primal variables with model-based value iteration following the principle of Optimism in the Face of Uncertainty (OFU), and the dual variables with gradient ascent. By incorporating function approximation, the algorithm scales to large state spaces. The paper proves that VPDPO achieves sublinear regret and sublinear constraint violation in two notable examples: Kernelized Nonlinear Regulators and Low-rank MDPs. |
Low | GrooveSquid.com (original content) | The paper introduces a new way to solve hard problems called Constrained Convex Markov Decision Processes (MDPs). It's a bit like finding the best path through a really complicated maze while also obeying certain rules. The authors' tool, called Variational Primal-Dual Policy Optimization (VPDPO), can handle very large state spaces and balance exploring new paths against sticking with what already works. The paper shows that this tool reliably gets close to the best possible solution while keeping the rules nearly satisfied. |
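
To make the "double duality" in the medium summary concrete, here is a minimal sketch of the reformulation it describes. The notation (objective f, constraint g, visitation measure \mu^\pi, dual variables \lambda and u) is ours for illustration and need not match the paper's exactly.

```latex
% Minimal sketch of the constrained convex MDP and its primal-dual
% reformulation (illustrative notation, not taken from the paper).
%
% Constrained problem over the visitation measure \mu^{\pi}:
\min_{\pi} \; f(\mu^{\pi}) \quad \text{s.t.} \quad g(\mu^{\pi}) \le 0 .
%
% Lagrangian duality moves the constraint into the objective:
\min_{\pi} \; \max_{\lambda \ge 0} \; f(\mu^{\pi}) + \lambda \, g(\mu^{\pi}) .
%
% Fenchel duality replaces the convex functionals by their conjugates,
% e.g. f(\mu) = \sup_{u} \{ \langle \mu, u \rangle - f^{*}(u) \},
% which makes the objective linear in \mu^{\pi}.
```

Once the objective is linear in the visitation measure, fixing the dual variables turns the primal step into an ordinary reward-based MDP, which is where the OFU-style value iteration comes in, while the dual variables are updated by gradient ascent, as described in the medium summary.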
Keywords
* Artificial intelligence
* Optimization