
Summary of Robust Deep Reinforcement Learning with Adaptive Adversarial Perturbations in Action Space, by Qianmei Liu and Yufei Kuang and Jie Wang


Robust Deep Reinforcement Learning with Adaptive Adversarial Perturbations in Action Space

by Qianmei Liu, Yufei Kuang, Jie Wang

First submitted to arXiv on: 20 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Deep reinforcement learning (DRL) algorithms can struggle with discrepancies between simulation and the real world. To address this, many studies employ adversarial learning to generate perturbations during training, aiming to improve robustness. However, fixed parameters controlling the intensity of these perturbations often lead to a trade-off between average performance and robustness. The proposed method, Adaptive Adversarial Perturbation (A2P), addresses this issue by dynamically selecting a suitable adversarial perturbation for each sample. It introduces an adaptive adversarial coefficient framework that adjusts the perturbation effect during training based on the current intensity metric. This simple yet effective approach can be deployed in real-world applications without requiring access to the simulator. MuJoCo experiments demonstrate improved training stability and robust policy learning when the trained policy is transferred to different test environments. (An illustrative code sketch of the adaptive-coefficient idea follows the summaries below.)
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper talks about how deep reinforcement learning (DRL) algorithms have a problem when moving from a simulation world to the real world. To solve this, some people use something called adversarial learning to help DRL algorithms learn better. However, most of these methods use a fixed setting that can make it hard to get both good performance and robustness. Our new approach is called Adaptive Adversarial Perturbation (A2P) and helps by adjusting the level of perturbation for each sample during training. This makes it easier to train DRL algorithms without needing special simulator access. We tested our method on a game-like environment and saw that it worked better than other methods.
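The summaries above describe A2P only at a high level, so the Python sketch below is a loose illustration rather than the paper's method: a random unit vector stands in for the adversarial direction, the `value_drop` argument stands in for the paper's intensity metric, and all names and hyperparameters (`AdaptiveActionPerturber`, `target_intensity`, `adapt_rate`) are hypothetical. It only shows the general pattern of perturbing actions with a coefficient that adapts during training.

```python
import numpy as np


class AdaptiveActionPerturber:
    """Toy sketch of adaptive adversarial perturbations in action space.

    A2P selects a per-sample adversarial perturbation and adapts its
    magnitude with an intensity metric; here a random direction replaces
    the adversarial direction and a simple proportional rule adapts the
    coefficient toward a target intensity (both are assumptions).
    """

    def __init__(self, action_dim, coeff=0.1, target_intensity=0.05,
                 adapt_rate=0.01, coeff_bounds=(0.0, 0.5)):
        self.action_dim = action_dim
        self.coeff = coeff                        # adaptive adversarial coefficient
        self.target_intensity = target_intensity  # desired perturbation effect
        self.adapt_rate = adapt_rate
        self.coeff_bounds = coeff_bounds

    def perturb(self, action, value_drop):
        """Perturb an action and update the coefficient.

        `value_drop` plays the role of an intensity metric: how much the
        perturbation degrades the critic's value estimate, as measured by
        the training loop (hypothetical interface).
        """
        # Stand-in for the adversarial direction: a random unit vector.
        direction = np.random.randn(self.action_dim)
        direction /= np.linalg.norm(direction) + 1e-8

        perturbed = np.clip(action + self.coeff * direction, -1.0, 1.0)

        # Proportional update: strengthen the perturbation when its measured
        # effect is below target, weaken it when it exceeds the target.
        self.coeff += self.adapt_rate * (self.target_intensity - value_drop)
        self.coeff = float(np.clip(self.coeff, *self.coeff_bounds))
        return perturbed


if __name__ == "__main__":
    perturber = AdaptiveActionPerturber(action_dim=6)
    action = np.zeros(6)
    for step in range(5):
        noisy_action = perturber.perturb(action, value_drop=0.02)
        print(step, round(perturber.coeff, 4), noisy_action[:2])
```

In a training loop, the perturbed action would be executed in the environment so the agent learns a policy that remains performant under action-space disturbances; the key point the sketch conveys is that the perturbation strength is not a fixed hyperparameter but is adjusted from a measured effect.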

Keywords

  • Artificial intelligence
  • Reinforcement learning