Summary of Trusted Multi-view Learning with Label Noise, by Cai Xu et al.


Trusted Multi-view Learning with Label Noise

by Cai Xu, Yilin Zhang, Ziyu Guan, Wei Zhao

First submitted to arxiv on: 18 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract, written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper proposes Trusted Multi-View Noise Refining (TMNR), a trusted multi-view learning method that remains reliable when training labels are noisy. Traditional multi-view methods focus on improving accuracy but neglect decision uncertainty, which is crucial for reliable decisions in safety-critical applications. TMNR learns class distributions and estimates both classification probabilities and uncertainty directly from noisily labeled data. It uses evidential deep neural networks to construct view-specific opinions consisting of belief mass vectors and uncertainty estimates. View-specific noise correlation matrices then align these original opinions with the noisy labels, accounting for label noise arising from low-quality data features and easily confused classes. Finally, TMNR aggregates the noisy opinions and trains the model with a generalized maximum likelihood loss. Empirical results on five publicly available datasets show that TMNR outperforms state-of-the-art baselines in accuracy, reliability, and robustness.
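To make the evidential ingredients concrete, here is a minimal NumPy sketch of the two building blocks the summary describes: turning per-view evidence into a subjective-logic opinion (belief masses plus an uncertainty mass), and mapping that opinion toward the noisy labels with a view-specific noise correlation matrix. This is an illustrative reconstruction, not the authors' implementation; the function names and the row-stochastic shape assumed for the noise matrix are our own assumptions.

```python
import numpy as np

def evidential_opinion(evidence):
    """Convert non-negative class evidence into a subjective-logic opinion,
    as in evidential deep learning: alpha = evidence + 1 parameterizes a
    Dirichlet, belief_k = evidence_k / S, uncertainty = K / S."""
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size
    alpha = evidence + 1.0      # Dirichlet concentration parameters
    S = alpha.sum()             # Dirichlet strength
    belief = evidence / S       # belief mass assigned to each class
    uncertainty = K / S         # leftover mass expressing overall uncertainty
    return belief, uncertainty  # belief.sum() + uncertainty == 1

def refine_with_noise_matrix(belief, noise_matrix):
    """Align a clean opinion with noisy labels via a view-specific noise
    correlation matrix (assumed shape: noise_matrix[i, j] approximates
    P(noisy label j | true class i), rows summing to 1)."""
    return belief @ noise_matrix

# Example: strong evidence for class 0 among K = 3 classes.
belief, u = evidential_opinion([4.0, 1.0, 0.0])
# With an identity noise matrix (no label noise) the opinion is unchanged.
refined = refine_with_noise_matrix(belief, np.eye(3))
```

The key property is that belief masses and the uncertainty mass sum to one, so a view with little evidence automatically reports high uncertainty, which is what lets the aggregated multi-view decision be "trusted".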
Low Difficulty Summary (original content by GrooveSquid.com)
This research paper develops a new method for making decisions more reliable. Most current decision-making methods focus only on getting the right answer and ignore how uncertain that answer might be, which is a problem in safety-critical applications where uncertainty can have serious consequences. The proposed method, called Trusted Multi-View Noise Refining (TMNR), addresses this by estimating both how likely a decision is to be correct and how confident we should be in it. TMNR uses special types of neural networks and algorithms to learn from noisy labels, which are common when data is incomplete or mislabeled.

Keywords

» Artificial intelligence  » Classification  » Likelihood