
Summary of Robust Q-Learning under Corrupted Rewards, by Sreejeet Maity and Aritra Mitra


Robust Q-Learning under Corrupted Rewards

by Sreejeet Maity, Aritra Mitra

First submitted to arXiv on: 5 Sep 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Systems and Control (eess.SY); Optimization and Control (math.OC); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract, written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This research paper investigates the robustness of Q-learning in the presence of corrupted rewards. The study adopts a strong-contamination attack model in which an adversary can arbitrarily perturb a small fraction of the observed rewards. The authors first prove that such attacks can cause vanilla Q-learning to incur arbitrarily large errors. To address this, they develop a novel robust synchronous Q-learning algorithm that uses historical reward data to construct robust empirical Bellman operators at each time step. The algorithm is shown to achieve finite-time convergence rates matching state-of-the-art bounds, up to a small additive error term that scales with the adversarial corruption fraction. These results hold even when the true reward distributions have infinite support, provided they admit bounded second moments. (A minimal code sketch of this kind of update appears after the summaries below.)

Low Difficulty Summary (original content by GrooveSquid.com)
This paper looks at how well Q-learning works when some of the rewards it sees are fake or wrong. The researchers ask whether Q-learning can still learn correctly when a small portion of its rewards is changed by an attacker. They show that standard Q-learning can be thrown off badly in this setting, and they develop a new algorithm called Robust Synchronous Q-Learning. The new algorithm uses past reward observations to keep its updates safe from the corrupted rewards. The study shows that the new algorithm learns almost as well as the best existing methods, up to a small error that depends on how much of the reward data the attacker changes.
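
To make the ideas above more concrete, here is a minimal Python sketch of a robust synchronous Q-learning step, in which a trimmed-mean estimate of the reward, computed from historical (possibly corrupted) reward observations, replaces the raw reward inside the Bellman update. The trimmed mean is only a stand-in for the paper's robust estimator, and all function names, data layouts, and parameters (trimmed_mean, robust_sync_q_update, reward_history, trim_frac) are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

# Minimal sketch of a robust synchronous Q-learning update.
# The trimmed-mean reward estimate stands in for the paper's robust
# empirical Bellman operator; everything here is illustrative.

def trimmed_mean(samples, trim_frac):
    """Drop the smallest and largest trim_frac fraction of samples, then average."""
    samples = np.sort(np.asarray(samples, dtype=float))
    k = int(np.floor(trim_frac * len(samples)))
    kept = samples[k:len(samples) - k] if len(samples) > 2 * k else samples
    return kept.mean()

def robust_sync_q_update(Q, reward_history, P, gamma, step_size, trim_frac):
    """One synchronous step: every (state, action) pair is refreshed at once.

    Q              : array of shape (S, A), current Q-value estimates
    reward_history : list over [s][a] of observed (possibly corrupted) rewards
    P              : array of shape (S, A, S), transition probabilities
    """
    S, A = Q.shape
    Q_new = Q.copy()
    for s in range(S):
        for a in range(A):
            # Robust estimate of the mean reward from historical observations.
            r_hat = trimmed_mean(reward_history[s][a], trim_frac)
            # Expected next-state value under the old Q (exact expectation
            # under P, for simplicity of the sketch).
            next_value = P[s, a] @ Q.max(axis=1)
            # Robust empirical Bellman target, then a standard step-size update.
            target = r_hat + gamma * next_value
            Q_new[s, a] = (1 - step_size) * Q[s, a] + step_size * target
    return Q_new
```

As a rough rule of thumb in this sketch, if the corruption fraction is ε, choosing trim_frac somewhat above ε would let the trimmed mean discard most of the adversarially perturbed samples while keeping the clean ones; the paper's actual estimator and its finite-time guarantees under bounded second moments may differ from this simplification.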

Keywords

* Artificial intelligence