
A Random Ensemble of Encrypted Vision Transformers for Adversarially Robust Defense

by Ryota Iijima, Sayaka Shiota, Hitoshi Kiya

First submitted to arxiv on: 11 Feb 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com; original content)
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. This paper proposes a novel defense that combines a random ensemble of encrypted vision transformer (ViT) models to enhance robustness against both white-box and black-box attacks. The approach is evaluated on image classification with the CIFAR-10 and ImageNet datasets, where it outperforms conventional state-of-the-art defenses in both clean accuracy and robust accuracy.
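The summary above describes the core idea at a high level: several models are each protected by a secret key, and one is chosen at random for every query. The sketch below is an illustrative toy, not the authors' implementation; the block-wise pixel shuffling transform, the `shuffle_blocks` and `RandomEnsemble` names, and the placeholder inference step are all assumptions introduced for clarity.

```python
# A minimal, illustrative sketch only -- NOT the paper's implementation.
# The paper ensembles encrypted Vision Transformers; here each ensemble
# member is represented by a secret key driving an assumed block-wise
# pixel-shuffling transform, and the ViT inference step is a placeholder.
import random

def shuffle_blocks(image, key, block=2):
    """Deterministically shuffle fixed-size blocks of a flat pixel list
    using a secret key; the same key always yields the same permutation."""
    rng = random.Random(key)
    blocks = [image[i:i + block] for i in range(0, len(image), block)]
    order = list(range(len(blocks)))
    rng.shuffle(order)
    return [px for idx in order for px in blocks[idx]]

class RandomEnsemble:
    """For each query, pick one secretly keyed member at random, so an
    attacker cannot predict which input transform will be applied."""
    def __init__(self, keys):
        self.keys = keys  # one secret key per encrypted model

    def predict(self, image):
        key = random.choice(self.keys)          # random member selection
        encrypted = shuffle_blocks(image, key)  # key-specific transform
        # A real system would feed `encrypted` to the ViT trained under
        # this key; we return the transformed input as a placeholder.
        return encrypted
```

Because the ensemble member is re-sampled on every query, a gradient-based white-box attacker must contend with a randomized, key-dependent input transform rather than a single fixed model.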
Low Difficulty Summary (written by GrooveSquid.com; original content)
A team of researchers has developed a new way to protect computer models from being tricked by deliberately altered inputs. They created a special kind of model that is hard for attackers to understand, making it harder for them to craft inputs that would fool it. The new method was tested on two different datasets and worked well in both cases. It even performed better than other methods designed to protect against such attacks.

Keywords

» Artificial intelligence  » Image classification