
Summary of Effects of Exponential Gaussian Distribution on (Double Sampling) Randomized Smoothing, by Youwei Shu et al.


Effects of Exponential Gaussian Distribution on (Double Sampling) Randomized Smoothing

by Youwei Shu, Xi Xiao, Derui Wang, Yuxin Cao, Siji Chen, Jason Xue, Linyi Li, Bo Li

First submitted to arXiv on: 4 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
Randomized Smoothing (RS) is a scalable certified defense that provides robustness guarantees against adversarial examples. This paper investigates the effect of two families of distributions, Exponential Standard Gaussian (ESG) and Exponential General Gaussian (EGG), on RS and Double Sampling Randomized Smoothing (DSRS). The authors derive an analytic formula for ESG’s certified radius and show that it converges to the original RS formula as the dimension increases. They also prove that EGG provides tighter constant factors than DSRS in lower-bounding the certified radius, which helps address the curse of dimensionality in RS. Experiments on real-world datasets confirm the theoretical analysis: ESG distributions yield almost the same certification for both RS and DSRS, while EGG brings a significant improvement to DSRS certification, increasing certified accuracy by up to 6.4% on ImageNet.
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about making computers more resistant to deliberately manipulated data. The authors are trying to make a method called Randomized Smoothing better at withstanding doctored pictures and other tampered inputs. They looked at two different ways of doing this and found that one of them works really well on big datasets like ImageNet: it handles tricky images up to 6.4% better than the old way! The goal is to make computers more trustworthy by making them better at telling real data from fake.

Keywords

» Artificial intelligence