Low-Tubal-Rank Tensor Recovery via Factorized Gradient Descent
by Zhiyu Liu, Zhi Han, Yandong Tang, Xi-Le Zhao, Yao Wang
First submitted to arXiv on: 22 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Optimization and Control (math.OC); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper addresses the problem of recovering a tensor with low-tubal-rank structure from a small number of corrupted linear measurements. Traditional recovery methods rely on the computationally expensive tensor Singular Value Decomposition (t-SVD), making them impractical for large-scale tensors. The authors instead propose an efficient low-tubal-rank tensor recovery method based on the Burer-Monteiro (BM) factorization: the large tensor is decomposed into two smaller factor tensors, which are then optimized by factorized gradient descent (FGD). This avoids t-SVD computation entirely, reducing both computational cost and storage requirements. A theoretical analysis establishes convergence under both noise-free and noisy settings, with robust performance even when the tubal-rank is slightly overestimated. |
Low | GrooveSquid.com (original content) | This paper tackles the problem of reconstructing a big tensor when some of its measurements are missing or corrupted. The usual fix relies on something called t-SVD, which takes too long and uses too much memory for really big tensors. To make recovery faster and more efficient, the authors use something called the Burer-Monteiro method: break the big tensor into two smaller pieces, then improve those pieces step by step with gradient descent. This makes the process much faster and uses far less memory. |
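The factorized idea described in the summaries can be sketched in a few lines of NumPy. This is an illustrative toy implementation, not the paper's exact algorithm: the function names, the small random initialization, the fixed step size `lr`, and the entrywise observation mask are all assumptions made here for the sketch; the paper's method and its convergence guarantees may rely on a different initialization and measurement model. The sketch forms the low-tubal-rank estimate as the t-product of two small factor tensors and runs plain gradient descent on both factors, so no t-SVD is ever computed.

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x r x n3) and B (r x n2 x n3):
    FFT along the tube (third) mode, slice-wise matrix products, inverse FFT."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)  # per-slice matrix product
    return np.real(np.fft.ifft(Cf, axis=2))

def t_transpose(A):
    """Tensor transpose: conjugate-transpose each frontal slice and
    reverse the order of slices 2..n3."""
    At = np.transpose(A, (1, 0, 2)).conj()
    return np.concatenate([At[:, :, :1], At[:, :, :0:-1]], axis=2)

def fgd_recover(Y, mask, r, lr=0.01, iters=3000, seed=0):
    """Toy factorized gradient descent: fit X = L * R (t-product) to the
    entries of Y observed under `mask`, never materializing a t-SVD."""
    n1, n2, n3 = Y.shape
    rng = np.random.default_rng(seed)
    L = 0.1 * rng.standard_normal((n1, r, n3))   # small random init (assumption)
    R = 0.1 * rng.standard_normal((r, n2, n3))
    for _ in range(iters):
        E = mask * (t_product(L, R) - Y)          # residual on observed entries
        gL = t_product(E, t_transpose(R))          # descent direction for L
        gR = t_product(t_transpose(L), E)          # descent direction for R
        L -= lr * gL
        R -= lr * gR
    return t_product(L, R)
```

Note the storage benefit the summaries mention: only the two factors (n1 x r x n3 and r x n2 x n3) are updated, which is much smaller than the full n1 x n2 x n3 tensor when r is small; the step size and iteration count here are ad-hoc choices for a small example.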
Keywords
* Artificial intelligence
* Gradient descent