
Summary of Out-of-Distribution Data: An Acquaintance of Adversarial Examples – A Survey, by Naveen Karunanayake et al.


Out-of-Distribution Data: An Acquaintance of Adversarial Examples – A Survey

by Naveen Karunanayake, Ravin Gunawardena, Suranga Seneviratne, Sanjay Chawla

First submitted to arXiv on: 8 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None

Abstract of paper | PDF of paper


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper's original abstract; read it via the "Abstract of paper" link above.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
A recently published survey explores the intersection of out-of-distribution (OOD) detection and adversarial robustness in deep neural networks (DNNs). The study examines how researchers have investigated these two areas together, leading to the identification of two key research directions: robust OOD detection and unified robustness. Robust OOD detection focuses on differentiating between in-distribution data and OOD data, even when manipulated adversarially to deceive the detector. Unified robustness seeks a single approach to make DNNs resistant to both adversarial attacks and OOD inputs. The survey establishes a taxonomy based on distributional shifts, reviewing existing work on robust OOD detection and unified robustness while highlighting limitations and proposing promising research directions.
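
To make the two concepts concrete, here is a minimal sketch (not taken from the paper) of a classic maximum-softmax-probability OOD detector together with a simple FGSM-style perturbation that an adversary could use to fool it. The model, threshold, and epsilon values are illustrative placeholders, and this baseline is a well-known technique rather than the survey's own method.

    # Minimal sketch: maximum-softmax-probability (MSP) OOD detection and a
    # simple FGSM-style perturbation that can fool it. Illustrative only;
    # this is a well-known baseline, not a method proposed by the survey.
    import torch
    import torch.nn.functional as F

    def msp_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
        """Maximum softmax probability per input; low values suggest OOD."""
        with torch.no_grad():
            return F.softmax(model(x), dim=-1).max(dim=-1).values

    def is_ood(model: torch.nn.Module, x: torch.Tensor,
               threshold: float = 0.5) -> torch.Tensor:
        """Flag inputs whose confidence falls below a (placeholder) threshold."""
        return msp_score(model, x) < threshold

    def fgsm_perturb(model: torch.nn.Module, x: torch.Tensor,
                     target_class: int, eps: float = 0.03) -> torch.Tensor:
        """Nudge an OOD input toward high confidence in target_class so that
        the MSP detector above no longer flags it: the threat that robust
        OOD detection methods aim to resist."""
        x_adv = x.clone().detach().requires_grad_(True)
        target = torch.full((x.shape[0],), target_class, dtype=torch.long)
        loss = F.cross_entropy(model(x_adv), target)
        loss.backward()
        # Step against the loss gradient to increase target-class confidence.
        return (x_adv - eps * x_adv.grad.sign()).detach()

A robust OOD detector would keep flagging such an input even after fgsm_perturb is applied, which is exactly the property the surveyed robust OOD detection methods target.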
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about how deep neural networks can get confused when they encounter data that’s not normal or has been intentionally changed to trick them. It looks at two related problems: detecting when the data is unusual, and making sure the network isn’t fooled by fake data. The study finds that these problems are connected, and there are different ways to approach solving both of them. It also reviews what other researchers have done on this topic and suggests new areas to explore.

Keywords

  • Artificial intelligence