Mitigating Adversarial Perturbations for Deep Reinforcement Learning via Vector Quantization

by Tung M. Luu, Thanh Nguyen, Tee Joshua Tian Jin, Sungwoon Kim, Chang D. Yoo

First submitted to arxiv on: 4 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (paper authors)
Read the original abstract here.

Medium Difficulty Summary (GrooveSquid.com, original content)
Recent studies have shown that well-performing reinforcement learning (RL) agents often lack resilience against adversarial perturbations during deployment. Most prior work addresses this by developing robust training-based procedures. In contrast, this work proposes an input transformation-based defense for RL, specifically using a variant of vector quantization (VQ) as a transformation for input observations. This approach shrinks the space of adversarial attacks during testing, making the transformed observations less affected by attacks. The proposed method is computationally efficient and integrates seamlessly with adversarial training, further enhancing the robustness of RL agents against adversarial attacks. Through extensive experiments in multiple environments, the authors demonstrate that using VQ as the input transformation effectively defends against adversarial attacks on the agent's observations.

Low Difficulty Summary (GrooveSquid.com, original content)
Imagine you're playing a game where you need to make smart decisions to win. But what if someone tried to trick you by making weird moves? That's kind of like what happens when an artificial intelligence (AI) is attacked in real life. Most people try to fix this problem by training the AI better. Instead, this new approach changes how the AI looks at the game before it starts playing, which makes it harder for someone to trick the AI later on. The scientists tested this idea and found that it really works! They used a special technique called vector quantization (VQ) to make the AI's observations less susceptible to attacks.
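To make the idea concrete, here is a minimal sketch of VQ as an input transformation: each observation is snapped to its nearest vector in a codebook before the policy sees it, so small adversarial perturbations often map to the same discrete code. The codebook here is random and the sizes are made up for illustration; the paper's actual VQ variant and how its codebook is learned may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned codebook: K code vectors, each matching the observation dimension.
K, obs_dim = 16, 4
codebook = rng.normal(size=(K, obs_dim))

def vq_transform(obs, codebook):
    """Map an observation to its nearest codebook vector (Euclidean distance)."""
    dists = np.linalg.norm(codebook - obs, axis=1)
    return codebook[np.argmin(dists)]

# A clean observation near one code vector, and a slightly perturbed ("attacked") copy.
obs = codebook[3] + 0.01 * rng.normal(size=obs_dim)
adv = obs + 0.05 * rng.normal(size=obs_dim)

# Both observations typically collapse to the same discrete code, so the
# downstream policy receives an identical input despite the perturbation.
same_code = np.allclose(vq_transform(obs, codebook), vq_transform(adv, codebook))
print("perturbation absorbed by VQ:", same_code)
```

Because the transformation is just a nearest-neighbor lookup, it adds little computational overhead at test time, which is consistent with the summary's claim that the defense is efficient and composes with adversarial training.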

Keywords

* Artificial intelligence  * Quantization  * Reinforcement learning