Summary of On the Computability of Robust PAC Learning, by Pascale Gourdeau et al.
On the Computability of Robust PAC Learning
by Pascale Gourdeau, Tosca Lechner, Ruth Urner
First submitted to arXiv on: 14 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
| --- | --- | --- |
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The study examines the computability requirements of adversarially robust learning, introducing robust computable PAC (robust CPAC) learning and giving sufficient conditions for it. The framework exhibits surprising effects: for instance, robust CPAC learnability does not require the robust loss to be computably evaluable. A novel dimension, the computable robust shattering dimension, is introduced; its finiteness is shown to be necessary but not sufficient for robust CPAC learnability. |
| Low | GrooveSquid.com (original content) | Adversarially robust learning makes sure that machines can still make good decisions even when faced with tricky or misleading data. This paper looks at how computers can be taught to learn from this kind of data without getting fooled. The authors create a new way of thinking about this problem and show some surprising things, such as the fact that the computer does not need to be able to exactly measure how badly it could be fooled. They also introduce a new idea, the "computable robust shattering dimension", which helps us understand when computers can learn from tricky data. |
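To make the "robust loss" mentioned above concrete, here is a minimal illustrative sketch (not taken from the paper): the adversarially robust 0/1 loss of a hypothesis on a labeled point takes the worst case over a set of allowed perturbations. The threshold classifier, the integer perturbation ball, and the function names below are hypothetical choices made purely for illustration.

```python
def robust_loss(h, x, y, radius=1):
    """Worst-case 0/1 loss of hypothesis h over integer perturbations of x
    within the given radius (a hypothetical finite perturbation set)."""
    perturbations = [x + d for d in range(-radius, radius + 1)]
    # The loss is 1 if any perturbed input makes h disagree with the label y.
    return max(0 if h(z) == y else 1 for z in perturbations)

# Example: a threshold classifier that predicts 1 for inputs >= 5.
h = lambda z: 1 if z >= 5 else 0

print(robust_loss(h, 6, 1, radius=0))  # 0: h is correct at x = 6 itself
print(robust_loss(h, 5, 1, radius=1))  # 1: the perturbation z = 4 flips h
```

The paper's surprising point, in these terms, is that a learner can be robustly CPAC-successful even when no algorithm can evaluate such a worst-case loss exactly (e.g., when the perturbation set is not finite or not computably enumerable).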