
Calibration Attacks: A Comprehensive Study of Adversarial Attacks on Model Confidence

by Stephen Obadinma, Xiaodan Zhu, Hongyu Guo

First submitted to arXiv on: 5 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper but are written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper investigates calibration attacks, a class of adversarial attacks that target the confidence of machine learning models without altering their predicted labels. The authors propose four types of calibration attacks: underconfidence, overconfidence, maximum miscalibration, and random confidence attacks, which can be applied in both black-box and white-box settings. They demonstrate that these attacks are highly effective against convolutional and attention-based models, significantly skewing confidence without affecting predictive performance. To mitigate the harm, the authors evaluate a range of adversarial defence and recalibration methods, including their own defences designed specifically for calibration attacks. The study highlights the limitations of existing methods in handling calibration attacks and provides detailed analyses of the attacks' characteristics. An illustrative code sketch of the basic attack idea appears after the summaries below.

Low Difficulty Summary (original content by GrooveSquid.com)
Calibration attacks are sneaky threats to machine learning models: they can make a model super confident or super uncertain without changing what it predicts. This paper looks into these attacks and shows how easy it is to trick even strong models such as convolutional and attention-based networks. The researchers also test ways to defend against these attacks and find that some methods work better than others. Overall, the study warns us about the dangers of calibration attacks and gives us tools to fight back.
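
To make the attack idea concrete, here is a minimal, hypothetical PyTorch sketch of a white-box underconfidence-style attack: it perturbs an input so that the model's confidence in its predicted class drops while the predicted label itself stays the same. The function name, the hyperparameters (eps, alpha, steps), and the simple projected-gradient loop are illustrative assumptions and do not reproduce the authors' actual attack algorithms.

```python
import torch
import torch.nn.functional as F

def underconfidence_attack(model, x, eps=0.03, alpha=0.005, steps=50):
    """Illustrative white-box underconfidence-style attack (not the paper's exact method).

    Perturbs x within an L-infinity ball of radius eps so that the softmax
    confidence of the originally predicted class drops, while the predicted
    label itself is kept unchanged.
    """
    model.eval()
    x_orig = x.detach()
    with torch.no_grad():
        orig_label = model(x_orig).argmax(dim=1)

    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # Confidence currently assigned to the originally predicted class.
        conf = F.softmax(logits, dim=1).gather(1, orig_label.unsqueeze(1))
        grad = torch.autograd.grad(conf.sum(), x_adv)[0]
        with torch.no_grad():
            # Step against the gradient to lower confidence, then project
            # back into the eps-ball and the valid pixel range.
            candidate = x_adv - alpha * grad.sign()
            candidate = x_orig + (candidate - x_orig).clamp(-eps, eps)
            candidate = candidate.clamp(0.0, 1.0)
            # Keep the step only if the predicted label is unchanged; this
            # label-preserving constraint is what distinguishes a calibration
            # attack from a standard misclassification attack.
            if (model(candidate).argmax(dim=1) == orig_label).all():
                x_adv = candidate.detach()
            else:
                break
    return x_adv.detach()
```

An overconfidence variant would simply ascend the same gradient instead of descending it, and black-box versions of such attacks would replace the gradient step with query-based search.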

Keywords

* Artificial intelligence
* Attention
* Machine learning