


Wasserstein Nonnegative Tensor Factorization with Manifold Regularization

by Jianyu Wang, Linruize Tang

First submitted to arXiv on: 3 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at a different level of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high-difficulty version is the paper’s original abstract.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper introduces Wasserstein manifold nonnegative tensor factorization (WMNTF), a novel approach for feature extraction and part-based representation of high-order data that preserves its intrinsic structure. WMNTF uses the Wasserstein distance, also known as the Earth Mover’s distance or optimal transport distance, to measure the discrepancy between the input tensorial data and its reconstruction, and it accounts for both the correlation information of features and the manifold information of samples. The authors also incorporate a graph regularizer into a latent factor to exploit spatial structure information. In experiments, WMNTF outperforms other nonnegative matrix factorization (NMF) and nonnegative tensor factorization (NTF) methods.

Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper is about a new way to group similar things together, called Wasserstein manifold nonnegative tensor factorization (WMNTF). It helps us find patterns in big datasets that have many features and relationships between them. The problem with older methods was that they didn’t consider how the different features are connected. This new method does, which makes it more powerful at finding useful information in complex datasets. In simple terms, this paper is about making computers better at understanding and organizing large amounts of data.
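The two ingredients described above — a Wasserstein (optimal transport) reconstruction loss and a graph-Laplacian regularizer on a latent factor — can be sketched in isolation as follows. This is a minimal illustration of those building blocks, not the authors’ WMNTF algorithm; the function names, toy histograms, adjacency matrix, and latent factor `H` are all hypothetical.

```python
import numpy as np

def sinkhorn_distance(p, q, C, reg=0.1, n_iter=200):
    """Entropy-regularized Wasserstein distance between histograms p and q
    with ground-cost matrix C, computed via Sinkhorn iterations."""
    K = np.exp(-C / reg)          # Gibbs kernel
    u = np.ones_like(p)
    for _ in range(n_iter):       # alternate scaling updates
        v = q / (K.T @ u)
        u = p / (K @ v)
    T = u[:, None] * K * v[None, :]   # approximate transport plan
    return float(np.sum(T * C))       # transport cost <T, C>

def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W of an adjacency matrix W."""
    return np.diag(W.sum(axis=1)) - W

# Toy reconstruction loss: two histograms over 4 bins, cost = bin distance.
p = np.array([0.4, 0.3, 0.2, 0.1])
q = np.array([0.1, 0.2, 0.3, 0.4])
bins = np.arange(4, dtype=float)
C = np.abs(bins[:, None] - bins[None, :])
d = sinkhorn_distance(p, q, C)

# Toy manifold regularizer tr(H L H^T) on a hypothetical latent factor H:
# small when columns of H connected in the graph have similar values.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])       # 3-node chain graph
L = graph_laplacian(W)
H = np.array([[1.0, 1.1, 0.9],
              [0.2, 0.1, 0.3]])
penalty = float(np.trace(H @ L @ H.T))
```

In a factorization of this flavor, terms like `d` and `penalty` would be combined into a single objective (transport cost plus a weighted graph penalty) and minimized over nonnegative factors; the sketch only shows how each term is evaluated.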

Keywords

* Artificial intelligence
* Feature extraction