Limited but consistent gains in adversarial robustness by co-training object recognition models with human EEG

by Manshan Guo, Bhavin Choksi, Sari Sadiya, Alessandro T. Gifford, Martina G. Vilas, Radoslaw M. Cichy, Gemma Roig

First submitted to arXiv on: 5 Sep 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper proposes a novel approach to improving the robustness of artificial neural networks (ANNs) against adversarial attacks by aligning model representations with human brain activity. The researchers trained ResNet50-backbone models on a dual task of classification and EEG prediction, using a rich set of real-world images as stimuli. They found that the networks’ EEG prediction accuracy was significantly correlated with their gains in adversarial robustness, particularly around 100 ms post-stimulus onset. The study also explored the contribution of individual EEG channels to this effect, finding the strongest contributions from parieto-occipital regions. This work demonstrates the potential of human EEG data for improving ANNs’ robustness and opens up avenues for future research. A minimal code sketch of the dual-task setup appears after the summaries below.

Low Difficulty Summary (original content by GrooveSquid.com)
This paper helps artificial neural networks (ANNs) become better at dealing with tricky attacks by making them learn more like humans do. Scientists trained special models that could both recognize pictures and predict what’s happening in people’s brains when they look at those pictures. They found that the more accurately these models predicted brain activity, the better they were at defending against sneaky attacks. This is an important step towards making ANNs more reliable and opens up new ways to study how humans think.
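
To make the dual-task training described in the medium summary more concrete, here is a minimal sketch rather than the authors’ actual implementation: it assumes a PyTorch ResNet50 backbone with a classification head plus a linear EEG-regression head, trained with a weighted sum of the two losses. The names (DualTaskResNet, num_classes, eeg_channels, eeg_timepoints, lambda_eeg) are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a dual-task model: image classification + EEG prediction.
# Hyperparameters (num_classes, eeg_channels, eeg_timepoints, lambda_eeg) are
# illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class DualTaskResNet(nn.Module):
    def __init__(self, num_classes=1000, eeg_channels=63, eeg_timepoints=100):
        super().__init__()
        backbone = resnet50(weights=None)
        feat_dim = backbone.fc.in_features      # 2048 for ResNet50
        backbone.fc = nn.Identity()             # strip the original classifier
        self.backbone = backbone
        self.cls_head = nn.Linear(feat_dim, num_classes)
        self.eeg_head = nn.Linear(feat_dim, eeg_channels * eeg_timepoints)
        self.eeg_shape = (eeg_channels, eeg_timepoints)

    def forward(self, images):
        feats = self.backbone(images)                               # (B, 2048)
        logits = self.cls_head(feats)                               # class logits
        eeg_pred = self.eeg_head(feats).view(-1, *self.eeg_shape)   # predicted EEG
        return logits, eeg_pred


def dual_task_loss(logits, labels, eeg_pred, eeg_true, lambda_eeg=1.0):
    """Joint objective: classification cross-entropy plus an EEG-regression term."""
    cls_loss = nn.functional.cross_entropy(logits, labels)
    eeg_loss = nn.functional.mse_loss(eeg_pred, eeg_true)
    return cls_loss + lambda_eeg * eeg_loss
```

In a setup like this, the EEG prediction accuracy that the paper correlates with robustness gains could be estimated by comparing eeg_pred against held-out recordings (for example, per-channel correlation), and the robustness gains measured with standard adversarial-attack evaluations; the exact attacks, loss weighting, and evaluation protocol used by the authors are not reproduced here.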

Keywords

  • Artificial intelligence
  • Classification