


In-depth Analysis of Low-rank Matrix Factorisation in a Federated Setting

by Constantin Philippenko, Kevin Scaman, Laurent Massoulié

First submitted to arXiv on: 13 Sep 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Optimization and Control (math.OC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper’s original abstract, written by the authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper presents a distributed algorithm for computing a low-rank matrix factorization across multiple clients, each holding a local dataset. The goal is to minimize the squared Frobenius norm between each local dataset and its approximation, using a factor common to all clients together with client-specific local factors. The authors initialize the common factor with a power method; with that factor fixed, the remaining subproblem is strongly convex, and they solve it using parallel Nesterov gradient descent. They prove a linear rate of convergence for the excess loss, improving on previous results, and provide experiments on both synthetic and real data.
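To make the medium summary concrete, below is a minimal NumPy sketch of the pipeline it describes: the clients jointly minimize Σ_i ‖S_i − U_i Vᵀ‖_F² over a factor V shared by all clients and local factors U_i; V is initialized with a few power iterations over the clients’ data, after which each client solves its strongly-convex least-squares subproblem for U_i with Nesterov’s accelerated gradient descent. The helper names, step sizes, and iteration counts here are illustrative assumptions, not the paper’s exact algorithm.

```python
import numpy as np

def power_init(S_list, rank, n_iters=5, seed=0):
    """Hypothetical helper: initialize the shared factor V with a few
    power iterations on sum_i S_i^T S_i (the paper's exact scheme may differ)."""
    rng = np.random.default_rng(seed)
    d = S_list[0].shape[1]
    V = rng.standard_normal((d, rank))
    for _ in range(n_iters):
        # Each client computes S_i^T (S_i V) locally; a server sums the results.
        V = sum(S.T @ (S @ V) for S in S_list)
        V, _ = np.linalg.qr(V)  # re-orthonormalize for numerical stability
    return V

def nesterov_least_squares(S, V, n_steps=200):
    """One client's subproblem: min_U 0.5 * ||S - U V^T||_F^2.
    With V fixed and of full column rank this is strongly convex in U,
    so Nesterov's accelerated gradient descent converges linearly."""
    H = V.T @ V                        # r x r curvature matrix
    eigs = np.linalg.eigvalsh(H)
    mu, L = eigs[0], eigs[-1]          # strong convexity / smoothness constants
    step = 1.0 / L
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    U = np.zeros((S.shape[0], V.shape[1]))
    U_prev = U.copy()
    SV = S @ V                         # constant part of the gradient
    for _ in range(n_steps):
        Y = U + beta * (U - U_prev)    # momentum look-ahead
        grad = Y @ H - SV              # gradient of 0.5 * ||S - U V^T||_F^2
        U_prev, U = U, Y - step * grad
    return U

# Toy check: three clients whose datasets share a common rank-5 structure.
rng = np.random.default_rng(1)
V_star = rng.standard_normal((50, 5))
S_list = [rng.standard_normal((30, 5)) @ V_star.T for _ in range(3)]
V = power_init(S_list, rank=5)
U_list = [nesterov_least_squares(S, V) for S in S_list]  # run in parallel in practice
loss = sum(np.linalg.norm(S - U @ V.T) ** 2 for S, U in zip(S_list, U_list))
```

Because power_init orthonormalizes V, each client’s subproblem above is perfectly conditioned (VᵀV = I) and converges in a single step; the accelerated solver is written for the general case of a non-orthonormal fixed factor.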
Low Difficulty Summary (original content by GrooveSquid.com)
The paper is about a new way to do math problems that involves lots of computers working together. The goal is to make a good guess at what a big matrix should look like by combining smaller pieces of information from many different places. The authors come up with a clever trick to help the computers work together better and show that this trick makes their method much faster than previous methods. They test their idea on some fake data and some real data, and it seems to work really well.

Keywords

  • Artificial intelligence
  • Gradient descent