Summary of Risk-averse Total-reward MDPs with ERM and EVaR, by Xihong Su et al.
Risk-averse Total-reward MDPs with ERM and EVaR
by Xihong Su, Julien Grand-Clément, Marek Petrik
First submitted to arXiv on: 30 Aug 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper proposes novel methods for optimizing risk-averse objectives in Markov Decision Processes (MDPs), focusing on the total-reward criterion under the Entropic Risk Measure (ERM) and Entropic Value at Risk (EVaR). It shows that these objectives can be optimized by a stationary policy, which makes analysis, interpretation, and deployment simpler. The authors propose exponential value iteration, policy iteration, and linear programming methods for computing optimal policies; these require only that the MDP be transient and allow both positive and negative rewards (a sketch of the value-iteration idea appears below the table). The results suggest that the total-reward criterion may be preferable to the discounted criterion in various risk-averse reinforcement learning domains. |
Low | GrooveSquid.com (original content) | The paper tackles a big problem in machine learning: finding a simple way to make decisions when uncertainty is involved. Right now, deciding what to do is hard because we need to think through all the possible outcomes and their risks. The researchers came up with new ways to weigh these risks using something called the total-reward criterion, which makes it easier for computers to learn from experience and make better choices in uncertain situations. |
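To ground the methods named in the medium summary, here is a minimal sketch of exponential value iteration for the ERM total-reward objective, together with EVaR obtained by maximizing ERM values over a grid of risk levels. Everything here is illustrative: the function names, the beta grid, and the toy two-state MDP are assumptions of this sketch, not the paper's code, and the coarse grid search stands in for the paper's exact optimization over the ERM risk parameter.

```python
import numpy as np

def erm_value_iteration(P, r, beta, n_iter=10_000, tol=1e-12):
    """Exponential value iteration for the ERM total-reward objective.

    Works in the exponential domain w(s) = exp(-beta * v(s)), where the
    ERM Bellman update becomes linear in w for each action:
        w(s) = min_a sum_{s'} P[s, a, s'] * exp(-beta * r[s, a, s']) * w(s').
    Minimizing w corresponds to maximizing v because w is decreasing in v.
    Assumes a tabular transient MDP: P and r have shape (S, A, S) and every
    policy eventually reaches a zero-reward absorbing sink.
    """
    disc = np.exp(-beta * r)          # per-transition exponential weights
    w = np.ones(P.shape[0])           # w = exp(-beta * v) with v = 0
    for _ in range(n_iter):
        # q[s, a] = sum_t P[s, a, t] * exp(-beta * r[s, a, t]) * w[t]
        q = np.einsum("sat,sat,t->sa", P, disc, w)
        w_new = q.min(axis=1)
        if np.max(np.abs(w_new - w)) < tol:
            w = w_new
            break
        w = w_new
    v = -np.log(w) / beta             # back to the value scale
    return v, q.argmin(axis=1)        # greedy stationary policy

def evar_value(P, r, s0, alpha, betas=np.logspace(-2.0, 1.0, 40)):
    """EVaR of the total reward from state s0, via the representation
    EVaR_alpha(X) = sup_{beta > 0} { ERM_beta(X) + log(alpha) / beta },
    approximated here by a coarse beta grid (an assumption of this sketch;
    the paper derives exact optimization methods)."""
    best_val, best_policy = -np.inf, None
    for beta in betas:
        v, policy = erm_value_iteration(P, r, beta)
        candidate = v[s0] + np.log(alpha) / beta
        if candidate > best_val:
            best_val, best_policy = candidate, policy
    return best_val, best_policy

# Toy transient MDP (hypothetical): state 0 is "active", state 1 is a sink.
# Action 0 is safe (reward 1); action 1 is risky (reward 3 on survival,
# -2 on termination). Both terminate with probability 0.2 per step.
P = np.zeros((2, 2, 2)); r = np.zeros((2, 2, 2))
P[0, 0] = [0.8, 0.2]; r[0, 0] = [1.0, 1.0]
P[0, 1] = [0.8, 0.2]; r[0, 1] = [3.0, -2.0]
P[1, :, 1] = 1.0                      # absorbing sink, zero reward

val, policy = evar_value(P, r, s0=0, alpha=0.2)
print(f"EVaR_0.2 total reward from state 0: {val:.3f}, policy: {policy}")
```

Working in the exponential domain is what makes plain value iteration applicable here: the ERM Bellman update, which involves a log of an expectation of exponentials, becomes a linear operator per action on w, so the usual fixed-point machinery for transient MDPs carries over.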
Keywords
» Artificial intelligence » Machine learning » Reinforcement learning