
The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks

by Ziquan Liu, Yufei Cui, Yan Yan, Yi Xu, Xiangyang Ji, Xue Liu, Antoni B. Chan

First submitted to arXiv on: 14 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
In this research paper, the authors investigate the uncertainty inherent in deep learning models used for safety-critical applications such as medical imaging and autonomous driving, where both adversarial robustness and reliable uncertainty quantification are needed for trustworthy decision-making. To bridge the gap between existing adversarial defenses and the need for uncertainty quantification, the study examines conformal prediction (CP) under the standard adversarial attacks used in the adversarial defense community. The findings reveal that CP methods do not produce informative prediction sets when the underlying model has not been adversarially trained, highlighting the importance of adversarial training (AT) for CP. Furthermore, the paper shows that the prediction set size (PSS) of CP with models trained by popular AT variants is often worse than with standard AT, prompting research into CP-efficient AT for improved PSS. The proposed approach optimizes a Beta-weighting loss with an entropy minimization regularizer during AT to improve CP efficiency, and the Beta weighting is theoretically analyzed as an upper bound of the PSS at the population level. Empirical studies on four image classification datasets across three popular AT baselines validate the effectiveness of the proposed Uncertainty-Reducing AT (AT-UR).
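To make the CP terminology concrete, here is a minimal sketch of split conformal prediction for classification and of the average prediction set size (PSS) metric. This is a generic illustration of CP, not the specific scores or procedure used in the paper; the function name and the 1 minus true-class-probability nonconformity score are assumptions chosen for exposition.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Minimal split conformal prediction for classification (illustrative).

    cal_probs:  (n_cal, n_classes) softmax scores on a held-out calibration set
    cal_labels: (n_cal,) integer labels for the calibration set
    test_probs: (n_test, n_classes) softmax scores on the test set
    alpha:      target miscoverage level (e.g. 0.1 for ~90% coverage)
    """
    n_cal = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true class.
    cal_scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]
    # Conformal quantile with the finite-sample correction (capped at 1).
    q_level = min(np.ceil((n_cal + 1) * (1 - alpha)) / n_cal, 1.0)
    qhat = np.quantile(cal_scores, q_level, method="higher")
    # A class enters the prediction set if its score is small enough.
    pred_sets = test_probs >= (1.0 - qhat)
    avg_pss = pred_sets.sum(axis=1).mean()  # average prediction set size (PSS)
    return pred_sets, avg_pss

# Example usage with random scores, for illustration only:
rng = np.random.default_rng(0)
cal_p = rng.dirichlet(np.ones(10), size=500)
cal_y = rng.integers(0, 10, size=500)
test_p = rng.dirichlet(np.ones(10), size=100)
sets, pss = split_conformal_sets(cal_p, cal_y, test_p, alpha=0.1)
```

A smaller average PSS (while maintaining coverage) is what the paper refers to as CP efficiency; under-confident or poorly trained models tend to inflate it.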
Low Difficulty Summary (written by GrooveSquid.com, original content)
The study explores how deep learning models can be made more reliable and robust in safety-critical applications like medical imaging and autonomous driving. The team focuses on the uncertainty inherent in these models, which is crucial for making accurate decisions. They investigate conformal prediction (CP) under standard adversarial attacks and find that existing methods do not work well when the model has not been adversarially trained. To improve this, they propose a new approach called Uncertainty-Reducing AT (AT-UR), which optimizes a special loss function to make CP more efficient.
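As a rough illustration of what a "Beta-weighting loss with an entropy minimization regularizer" could look like in code, here is a hedged PyTorch-style sketch. The Beta(a, b) weighting of the true-class probability, the regularizer weight lam, and the function name are illustrative assumptions; the paper's exact AT-UR formulation may differ.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def beta_weighted_entropy_loss(logits_adv, labels, a=2.0, b=5.0, lam=0.1):
    """Illustrative sketch: Beta-weighted cross-entropy plus an
    entropy-minimization term, in the spirit of AT-UR (not the paper's
    exact loss). logits_adv are the model's outputs on adversarial inputs.
    """
    probs = F.softmax(logits_adv, dim=1)
    true_prob = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    # Per-sample weight from a Beta pdf evaluated at the true-class
    # probability, emphasizing uncertain (low-confidence) samples.
    beta = Beta(torch.tensor(a), torch.tensor(b))
    weights = beta.log_prob(true_prob.clamp(1e-4, 1.0 - 1e-4)).exp().detach()
    ce = F.cross_entropy(logits_adv, labels, reduction="none")
    weighted_ce = (weights * ce).mean()
    # Entropy minimization encourages confident (low-entropy) predictions,
    # which tends to shrink conformal prediction sets.
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
    return weighted_ce + lam * entropy
```

In an adversarial training loop, this loss would replace the standard cross-entropy on adversarially perturbed inputs; the adversarial example generation itself (e.g. a PGD-style attack) is unchanged.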

Keywords

» Artificial intelligence  » Deep learning  » Image classification  » Loss function  » Prompting