Summary of Regularized Robustly Reliable Learners and Instance Targeted Attacks, by Avrim Blum et al.


Regularized Robustly Reliable Learners and Instance Targeted Attacks

by Avrim Blum, Donya Saless

First submitted to arXiv on: 14 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR); Data Structures and Algorithms (cs.DS); Machine Learning (stat.ML)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high-difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper addresses instance-targeted data poisoning attacks, in which an attacker corrupts the training data to induce errors on specific test points. The authors propose a notion of robustly-reliable learners that provide per-instance guarantees of correctness under well-defined assumptions, even in the presence of data poisoning attacks. They present a generic optimal (but computationally inefficient) robustly-reliable learner, as well as a computationally efficient algorithm for linear separators over log-concave distributions.

Low Difficulty Summary (written by GrooveSquid.com; original content)
The paper proposes a solution to instance-targeted data poisoning attacks, in which an attacker corrupts training data to make a machine learning model make mistakes on specific test points. The authors define "robustly-reliable learners," which can still answer correctly even if some of the training data has been attacked. They show how to build such learners and give a computationally efficient way to do so.

Keywords

* Artificial intelligence
* Machine learning