Identifying Predictions That Influence the Future: Detecting Performative Concept Drift in Data Streams

by Brandon Gower-Winter, Georg Krempl, Sergey Dragomiretskiy, Tineke Jelsma, Arno Siebes

First submitted to arXiv on: 13 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper investigates a phenomenon called “performative concept drift” in stream learning, where a model’s predictions can induce changes in the very data it is trained on. This can happen in settings like automated trading or malicious entity detection, where the model’s outputs create self-fulfilling feedback loops. The authors define performative drift and propose a novel detection approach, CheckerBoard Performative Drift Detection (CB-PDD). Tested on synthetic and semi-synthetic datasets, CB-PDD effectively detects performative drift while remaining resilient to traditional intrinsic drift. A toy sketch of such a prediction-induced feedback loop follows these summaries.

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper looks at how models in stream learning can affect the data they’re trained on. This is called “performative concept drift” and happens when a model’s predictions create feedback loops: the data changes based on what the model predicts. It shows up in areas like trading or spotting bad actors, where the model’s outputs change the data it later sees. The authors define performative drift and build a new way to detect it, called CB-PDD. They test it on fake and partly-fake datasets and show that it works well.

Keywords

  • Artificial intelligence