Summary of Mitigating the Curse of Dimensionality for Certified Robustness via Dual Randomized Smoothing, by Song Xia et al.
Mitigating the Curse of Dimensionality for Certified Robustness via Dual Randomized Smoothing
by Song Xia, Yi Yu, Xudong Jiang, Henghui Ding
First submitted to arXiv on: 15 Apr 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Randomized Smoothing (RS) has shown promise in endowing arbitrary image classifiers with certified robustness. However, RS suffers from the curse of dimensionality: the upper bound of its certified robustness radius decreases at a rate of 1/√d with input dimension d. This paper proposes Dual Randomized Smoothing (DRS), which down-samples the input and smooths it in lower-dimensional spaces. DRS guarantees a tight certified robustness radius for the high-dimensional input and achieves a superior upper bound that decreases at a rate of 1/√m + 1/√n with input dimension d = m + n, which exceeds 1/√d for any split. Experiments show that DRS integrates well with established methods, improving accuracy and certified-robustness baselines on CIFAR-10 and ImageNet (see the code sketch after this table). |
| Low | GrooveSquid.com (original content) | A new way to make image recognition models more reliable is being explored. Right now, a method called Randomized Smoothing (RS) can give arbitrary models some protection against adversarial mistakes. However, RS has a problem: as images get larger, it becomes less effective. This paper proposes Dual Randomized Smoothing (DRS), which handles larger images better by down-sampling the image into smaller versions and smoothing each one in a lower-dimensional space. Theory shows that DRS provides a stronger robustness guarantee than RS, especially for large images, and tests on popular datasets show it improves both recognition accuracy and certified robustness. |
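To make the mechanism concrete, here is a minimal sketch of a DRS-style prediction step, not the authors' implementation. It splits an image into two down-sampled sub-images (an interleaved row split is just one possible choice), runs the standard randomized-smoothing Monte Carlo vote on each with Gaussian noise, and pools the votes. The function name `drs_predict`, the two classifier arguments, `sigma`, and `n_samples` are all illustrative placeholders.

```python
import torch

def drs_predict(x, clf_a, clf_b, num_classes=10, sigma=0.5, n_samples=100):
    """Sketch of a dual-randomized-smoothing prediction step.

    x: image tensor of shape (C, H, W); clf_a / clf_b are the two base
    classifiers operating on the lower-dimensional sub-images.
    """
    # Down-sample into two lower-dimensional sub-images (dimensions m and n,
    # with m + n = d); here, an interleaved split over image rows.
    x_a = x[:, 0::2, :]  # even rows
    x_b = x[:, 1::2, :]  # odd rows

    votes = torch.zeros(num_classes)
    for sub_x, clf in ((x_a, clf_a), (x_b, clf_b)):
        # Standard RS Monte Carlo step: classify Gaussian-perturbed copies
        # of the sub-image and tally the predicted classes.
        noisy = sub_x.unsqueeze(0) + sigma * torch.randn(n_samples, *sub_x.shape)
        preds = clf(noisy).argmax(dim=1)
        votes += torch.bincount(preds, minlength=num_classes).float()

    return votes.argmax().item()

# Toy usage with a random stand-in for the two trained base classifiers.
if __name__ == "__main__":
    dummy_clf = lambda z: torch.randn(z.shape[0], 10)
    x = torch.rand(3, 32, 32)  # CIFAR-10-sized input
    print(drs_predict(x, dummy_clf, dummy_clf))
```

In the paper's actual method, the two smoothed sub-classifiers are used to certify an ℓ2 robustness radius in the original d-dimensional input space; the simple vote pooling above merely stands in for that certification step.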