Summary of Towards Robust Model-based Reinforcement Learning Against Adversarial Corruption, by Chenlu Ye et al.
Towards Robust Model-Based Reinforcement Learning Against Adversarial Corruption
by Chenlu Ye, Jiafan He, Quanquan Gu, Tong Zhang
First submitted to arXiv on: 14 Feb 2024
Categories
- Main: Machine Learning (stat.ML)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This study addresses adversarial corruption in model-based reinforcement learning (RL), a crucial concern when an adversary manipulates the transition dynamics. While existing research focuses on model-free RL, this paper targets model-based RL built on maximum likelihood estimation (MLE). The authors introduce two algorithms: corruption-robust optimistic MLE (CR-OMLE) for the online setting and corruption-robust pessimistic MLE (CR-PMLE) for the offline setting. CR-OMLE achieves a regret of O(sqrt(T) + C), where T is the number of episodes and C is the cumulative corruption level; a matching lower bound shows that this additive dependence on C is optimal. The authors also extend their technique to the offline setting, proposing CR-PMLE, whose suboptimality bound degrades by an additive O(C/n) term, with n the number of offline samples (both bounds are restated after this table). This research provides the first provable guarantees for corruption-robust model-based RL algorithms. |
Low | GrooveSquid.com (original content) | This study looks at a big problem in artificial intelligence called adversarial corruption. Imagine someone trying to trick an AI system by tampering with how it learns about the world. The researchers developed new ways for AI systems to learn and make decisions even while being corrupted like this. They created two algorithms that can handle such attempts to corrupt the learning process: one works well when the system learns online, in real time, while the other is better suited for situations where a large dataset has already been collected. This study is important because it helps make AI systems more reliable and less susceptible to attacks. |
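The two guarantees described in the medium-difficulty summary can be written compactly as follows. This is only a restatement of the bounds quoted above, with constants and possible logarithmic factors omitted; the symbol n for the number of offline samples and the term SubOpt_clean(n) for the corruption-free part of the offline bound are notation assumed here, not spelled out in the summary.

```latex
% Online setting (CR-OMLE): regret over T episodes under cumulative corruption C.
\[
  \mathrm{Regret}(T) \;\le\; \mathcal{O}\!\left(\sqrt{T} + C\right)
\]

% Offline setting (CR-PMLE): the corruption-free suboptimality bound,
% written generically as SubOpt_clean(n), is worsened by an additive O(C/n) term,
% where n denotes the number of offline samples (assumed notation).
\[
  \mathrm{SubOpt}(\widehat{\pi}) \;\le\; \mathrm{SubOpt}_{\mathrm{clean}}(n) \;+\; \mathcal{O}\!\left(\frac{C}{n}\right)
\]
```

The key point in both cases is that the corruption enters only additively: as stated in the summary, the paper's lower bound shows this additive dependence on C is optimal for the online setting.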
Keywords
* Artificial intelligence
* Likelihood
* Reinforcement learning