
Optimal Classification under Performative Distribution Shift

by Edwige Cyffers, Muni Sreenivas Pydi, Jamal Atif, Olivier Cappé

First submitted to arXiv on: 4 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (by the paper authors)
The high difficulty version is the paper's original abstract.

Medium Difficulty Summary (by GrooveSquid.com, original content)
The paper proposes a novel view of performative learning that models the effect of algorithmic decisions on the data distribution as a push-forward measure. This generalizes existing models and enables efficient, scalable learning strategies under distribution shift. Unlike previous approaches, which require specifying the full data distribution, the framework only assumes knowledge of the shift operator representing the performative changes, so it can be integrated into change-of-variable-based models such as VAEs or normalizing flows. Focusing on classification with a performative effect that is linear in the model parameters, the paper proves convexity of the performative risk under new assumptions and connects performative learning to adversarially robust classification by reformulating the minimization of the performative risk as a min-max variational problem.

Low Difficulty Summary (by GrooveSquid.com, original content)
Performative learning is important because it helps us understand how algorithmic decisions can change the data distribution they act on. The authors propose a new way to model these changes, using push-forward measures. This approach allows for more efficient and scalable learning strategies when dealing with changing data distributions. They also show that their method connects to adversarially robust classification, which is important for ensuring fairness in machine learning models.

Keywords

* Artificial intelligence  * Classification  * Machine learning