Summary of Robust Distribution Learning with Local and Global Adversarial Corruptions, by Sloan Nietert et al.
Robust Distribution Learning with Local and Global Adversarial Corruptions
by Sloan Nietert, Ziv Goldfeld, Soroosh Shafiee
First submitted to arXiv on: 10 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary The paper proposes a novel approach to learning in an adversarial environment, where an ε-fraction of the data is arbitrarily corrupted (global corruptions) and the remaining samples are perturbed with bounded average magnitude ρ (local corruptions). The goal is to design a computationally efficient estimator that minimizes the Wasserstein distance between the estimated and true distributions. The authors develop a method that simultaneously accounts for mean estimation, distribution estimation, and the settings interpolating between these two extremes. They characterize the optimal population-limit risk and develop an efficient finite-sample algorithm with error bounded by √(εk) + ρ + Õ(dn^{-1/(k∨2)}). This approach has applications in robust stochastic optimization, where it helps overcome the curse of dimensionality in Wasserstein distributionally robust optimization. |
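To make the corruption model concrete, here is a small simulation sketch: an ε-fraction of samples is replaced arbitrarily (global corruptions) and the rest receive small perturbations of average size ρ (local corruptions). The winsorized mean used below is a standard robust baseline, not the authors' estimator, and all parameter values (n, d, ε, ρ, the outlier location) are illustrative assumptions.

```python
# Toy simulation of local + global adversarial corruptions, and a simple
# robust baseline (winsorized mean). This is NOT the paper's algorithm;
# it only illustrates why naive averaging fails under global corruption.
import numpy as np

rng = np.random.default_rng(0)
n, d, eps, rho = 2000, 5, 0.1, 0.05  # illustrative parameter choices
true_mean = np.zeros(d)

# Clean samples from P = N(0, I_d)
x = rng.normal(size=(n, d))

# Local corruptions: small perturbations with average norm on the order of rho
x += rho * rng.normal(size=(n, d)) / np.sqrt(d)

# Global corruptions: an eps-fraction of points replaced by far-away outliers
m = int(eps * n)
x[:m] = 50.0

def winsorized_mean(samples, frac):
    """Clip each coordinate to its [frac, 1 - frac] quantile range, then average."""
    lo, hi = np.quantile(samples, [frac, 1.0 - frac], axis=0)
    return np.clip(samples, lo, hi).mean(axis=0)

naive_err = np.linalg.norm(x.mean(axis=0) - true_mean)
# Trim a bit more than the corruption fraction so all outliers get clipped
robust_err = np.linalg.norm(winsorized_mean(x, 2 * eps) - true_mean)
print(naive_err, robust_err)
```

Running this, the naive mean is pulled far toward the outliers (error roughly ε times the outlier distance), while the winsorized mean stays close to the true mean; the residual error reflects the local-corruption budget ρ and finite-sample noise, mirroring the ε and ρ terms in the bound above.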
Low | GrooveSquid.com (original content) | Low Difficulty Summary The paper talks about learning in a tricky situation where some of the data has been changed. The goal is to find an efficient way to estimate what the true data looks like, even if we don’t have all of the original data. The authors come up with a new method that can handle this problem and show how it can be used in fields like machine learning. |
Keywords
» Artificial intelligence » Machine learning » Optimization