Summary of Unsupervised Model Diagnosis, by Yinong Oliver Wang et al.


Unsupervised Model Diagnosis

by Yinong Oliver Wang, Eileen Li, Jinqi Luo, Zhaoning Wang, Fernando De la Torre

First submitted to arXiv on: 8 Oct 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)

This paper proposes Unsupervised Model Diagnosis (UMO), a novel framework for evaluating the robustness and explainability of deep vision systems. Current methods for assessing robustness rely on collecting and annotating extensive test sets, which is labor-intensive and expensive, with no guarantee of sufficient coverage across the attributes of interest. UMO instead leverages generative models to produce semantic counterfactual explanations without any user guidance, optimizing for the most counterfactual directions in a generative latent space. The framework identifies and visualizes changes in semantics, then matches these changes to attributes drawn from wide-ranging text sources such as dictionaries or language models. Experiments on multiple vision tasks demonstrate that UMO correctly highlights spurious correlations and visualizes the failure modes of target models without any human intervention.
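To make the two steps described above more concrete, here is a minimal PyTorch-style sketch: first searching a generator's latent space for an edit that flips the target model's prediction, then naming that edit by comparing it against a vocabulary of attribute words in a joint text-image embedding space. This is an illustration of the general idea, not the paper's implementation; `generator`, `target_model`, `clip_model`, and the loss weights are all hypothetical stand-ins.

```python
# Hypothetical sketch of unsupervised counterfactual search and attribute
# matching. `generator`, `target_model`, and `clip_model` are stand-ins,
# not the paper's actual components or any specific library's API.
import torch
import torch.nn.functional as F

def find_counterfactual_direction(generator, target_model, z,
                                  steps=200, lr=0.05, reg_weight=0.1):
    """Optimize a latent offset d so target_model's prediction flips."""
    with torch.no_grad():
        original_pred = target_model(generator(z)).argmax(dim=-1)
    d = torch.zeros_like(z, requires_grad=True)
    optimizer = torch.optim.Adam([d], lr=lr)
    for _ in range(steps):
        logits = target_model(generator(z + d))
        # Push down the logit of the originally predicted class; a norm
        # penalty keeps the edit small so the change stays semantic.
        flip_loss = logits.gather(-1, original_pred.unsqueeze(-1)).mean()
        loss = flip_loss + reg_weight * d.norm()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return d.detach()

def match_attributes(clip_model, image_before, image_after, vocabulary):
    """Rank candidate attribute words by how well they describe the edit."""
    # Compare the direction of change in image-embedding space against
    # text embeddings of candidate words (e.g. from a dictionary).
    delta = (clip_model.encode_image(image_after)
             - clip_model.encode_image(image_before))
    text_emb = clip_model.encode_text(vocabulary)  # shape: (num_words, dim)
    scores = F.cosine_similarity(delta, text_emb, dim=-1)
    return sorted(zip(vocabulary, scores.tolist()), key=lambda p: -p[1])
```

In the actual framework the counterfactual objective and attribute matching are more involved; the sketch only shows where a spurious correlation would surface in such a pipeline: if the top-ranked attribute word is unrelated to the task (for example, a background property), the target model is likely relying on it.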
Low Difficulty Summary (original content by GrooveSquid.com)

This paper helps us understand how to make sure deep learning computer vision systems are reliable and work well even when things get tricky. Right now, we have to collect lots of data and label it, which takes a lot of time and money but still doesn't guarantee our system will work in all situations. The authors develop a new way to diagnose models that uses generative models to explain why a model makes certain decisions, without needing any human input. This approach can help us see where the model might be making mistakes, and why.

Keywords

» Artificial intelligence  » Deep learning  » Latent space  » Semantics  » Unsupervised