
Summary of Is It the Model or the Metric – On Robustness Measures of Deep Learning Models, by Zhijin Lyu et al.


Is it the model or the metric – On robustness measures of deep learning models

by Zhijin Lyu, Yutong Jin, Sneha Das

First submitted to arXiv on: 13 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com, original content)
Deep learning models are widely used in high-stakes applications like healthcare, education, and border control, so understanding their limitations is crucial for successful and safe deployment. This paper investigates robustness, particularly in deepfake detection, by introducing a new metric called the robust ratio (RR), which measures changes to normalized or probability outputs under input perturbation. The authors also compare robust accuracy (RA) with RR and demonstrate that models with similar RA can exhibit markedly different RR at different tolerance levels.
Low Difficulty Summary (written by GrooveSquid.com, original content)
Deep learning models are being used in many important areas like healthcare, schools, and border control. It's very important to understand how these models might not work correctly in certain situations, so we can make sure they're safe to use. This paper is about making sure deepfake detection models are robust and don't fail when input data changes slightly. The authors introduce a new way to measure this called the robust ratio (RR). They also compare RR with another measure called robust accuracy (RA) and show that even though models may have similar RA, they can behave differently when faced with small changes in the data.
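The summaries contrast two ways of measuring robustness: robust accuracy (does the predicted label survive the perturbation?) and the robust ratio (do the probability outputs stay within a tolerance?). The paper's exact formulas are not reproduced in this summary, so the sketch below is only an illustrative interpretation: it assumes RA is the fraction of perturbed inputs still classified correctly, and RR is the fraction of inputs whose softmax probabilities move less than a tolerance under perturbation. The function names and the toy logits are hypothetical.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def robust_accuracy(pert_logits, labels):
    # Fraction of samples still classified correctly after perturbation.
    return float(np.mean(pert_logits.argmax(axis=1) == labels))

def robust_ratio(clean_logits, pert_logits, tolerance):
    # Fraction of samples whose probability outputs move by at most
    # `tolerance` (max absolute change over classes) under perturbation.
    delta = np.abs(softmax(pert_logits) - softmax(clean_logits)).max(axis=1)
    return float(np.mean(delta <= tolerance))

# Toy example: two models with identical RA can differ sharply in RR.
labels = np.array([0, 1])
clean = np.array([[4.0, 0.0], [0.0, 4.0]])
# Model A: the perturbation barely shifts the probabilities.
pert_a = np.array([[3.9, 0.0], [0.0, 3.9]])
# Model B: the perturbation shifts probabilities a lot but keeps the argmax.
pert_b = np.array([[0.5, 0.0], [0.0, 0.5]])

print(robust_accuracy(pert_a, labels), robust_accuracy(pert_b, labels))  # 1.0 1.0
print(robust_ratio(clean, pert_a, 0.1), robust_ratio(clean, pert_b, 0.1))  # 1.0 0.0
```

Both toy models keep the correct label under perturbation (identical RA), yet model B's probabilities collapse toward uncertainty, so its RR drops to zero at a 0.1 tolerance, which is the kind of distinction the summaries say RA alone cannot capture.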

Keywords

  • Artificial intelligence
  • Deep learning
  • Probability