Summary of A Training Rate and Survival Heuristic for Inference and Robustness Evaluation (TRASHFIRE), by Charles Meyers et al.
A Training Rate and Survival Heuristic for Inference and Robustness Evaluation (TRASHFIRE)
by Charles Meyers, Mohammad Reza Saleh Sedghpour, Tommy Löfstedt, Erik Elmroth
First submitted to arXiv on: 24 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | In this paper, researchers investigate how machine learning models, particularly deep neural networks, perform across domains when faced with adversarial counter-examples. They highlight the imbalance between how cheaply candidate adversarial examples can be generated and how time-consuming it is to find ones that actually succeed. The authors then focus on how model hyper-parameters influence performance in the presence of an adversary. They propose a survival-model-based approach that uses worst-case examples and cost-aware analysis to reject a model change during routine training, rather than relying on real-world deployment or expensive verification methods. Through evaluations of various pre-processing techniques, counter-examples, and neural network configurations, they find that deeper models offer only marginal gains in survival time, and that these gains are driven by longer inference times rather than inherent robustness. (A minimal illustrative sketch of the survival-analysis idea follows the table.) |
Low | GrooveSquid.com (original content) | Machine learning models are very good at many tasks, but there's a problem when someone tries to trick them. It takes a lot of effort to figure out how to trick a model, yet once the trick is found the model gets fooled very quickly. Scientists have been working on fixing this issue, but they haven't paid much attention to how expensive it is to defend against these tricks. In this paper, researchers explore how changing certain settings in a model affects its performance when someone tries to trick it. They use a special method that helps them quickly decide whether a change to the model is worth keeping. By testing many different techniques and models, they found that deeper models are only slightly better at resisting these tricks than shallower ones. |
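
The paper's own tooling is not reproduced here. As a rough, illustrative sketch of what a survival-model analysis of attack outcomes can look like, the snippet below fits a Cox proportional-hazards model (via the `lifelines` library) to a small table of attack runs. The column names (`duration`, `failed`, `depth`, `epochs`) and every number in it are made up for illustration and are not taken from the paper.

```python
# Illustrative sketch only: survival analysis of model robustness under attack.
# All data below are synthetic; the paper's experiments use real attacks against
# trained neural networks, while this only demonstrates the statistical machinery.
import pandas as pd
from lifelines import CoxPHFitter

# One row per (model configuration, attack) run.
# "duration": how long the model survived the attack (e.g., attack iterations or
#             wall-clock seconds until accuracy fell below a threshold).
# "failed":   1 if the attack eventually succeeded, 0 if the run was censored.
# "depth", "epochs": hypothetical hyper-parameters used as covariates.
runs = pd.DataFrame({
    "duration": [12.0, 30.0, 7.5, 45.0, 22.0, 60.0, 5.0, 33.0],
    "failed":   [1,    1,    1,   1,    1,    0,    1,   1],
    "depth":    [18,   50,   18,  152,  50,   152,  18,  50],
    "epochs":   [20,   20,   50,  50,   100,  100,  20,  50],
})

# Fit a Cox proportional-hazards model: covariates with hazard ratios below 1
# are associated with longer survival, i.e., more attack effort is needed
# before the model fails.
cph = CoxPHFitter()
cph.fit(runs, duration_col="duration", event_col="failed")
cph.print_summary()  # hazard ratio and confidence interval per hyper-parameter
```

In this framing, a hazard ratio below 1 for a covariate such as `depth` would indicate that deeper models survive attacks longer; the paper's finding is that such gains are marginal and largely attributable to slower inference rather than genuine robustness.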
Keywords
* Artificial intelligence
* Inference
* Machine learning
* Neural network