PSMGD: Periodic Stochastic Multi-Gradient Descent for Fast Multi-Objective Optimization

by Mingjing Xu, Peizhong Ju, Jia Liu, Haibo Yang

First submitted to arXiv on: 14 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper proposes a new algorithm, Periodic Stochastic Multi-Gradient Descent (PSMGD), for multi-objective optimization (MOO) in machine learning. The authors highlight a limitation of existing gradient-manipulation methods: at every iteration they must solve an additional optimization problem to determine a common descent direction that decreases all objectives simultaneously. PSMGD addresses this by computing the dynamic weights only periodically and reusing them across iterations, which reduces computational overhead. Theoretical analysis shows that PSMGD achieves state-of-the-art convergence rates for strongly convex, general convex, and non-convex functions. The authors also introduce backpropagation complexity as a new measure of training cost and demonstrate that PSMGD can provide comparable or superior performance while reducing training time.
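To make the mechanism concrete, here is a minimal Python sketch of the periodic-weighting idea. This is not the paper's implementation: the two-objective min-norm solver, the hyperparameter names (period, lr, noise), and the toy quadratic objectives are all illustrative assumptions.

    import numpy as np

    # Two-objective min-norm solver: closed-form gamma in [0, 1] minimizing
    # ||gamma*g1 + (1-gamma)*g2||^2. A stand-in for the general weighting
    # subproblem, which is a quadratic program over the simplex.
    def min_norm_weights(g1, g2):
        d = g1 - g2
        denom = d @ d
        if denom == 0.0:
            return np.array([0.5, 0.5])
        gamma = np.clip(-(d @ g2) / denom, 0.0, 1.0)
        return np.array([gamma, 1.0 - gamma])

    # Sketch of the periodic-weighting loop for exactly two objectives.
    def psmgd(x, grad_fns, lr=0.05, period=10, steps=300, noise=0.01, seed=0):
        rng = np.random.default_rng(seed)
        w = np.full(len(grad_fns), 1.0 / len(grad_fns))  # start uniform
        for t in range(steps):
            # stochastic gradients: true gradients plus Gaussian noise
            grads = [g(x) + noise * rng.standard_normal(x.shape)
                     for g in grad_fns]
            if t % period == 0:
                # periodic step: recompute the dynamic weights
                # (the extra optimization subproblem)
                w = min_norm_weights(grads[0], grads[1])
            # every other step: reuse the cached weights, skipping the subproblem
            direction = sum(wi * gi for wi, gi in zip(w, grads))
            x = x - lr * direction
        return x

    # Toy objectives f_i(x) = ||x - c_i||^2 with gradients 2 * (x - c_i).
    c1, c2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    x_final = psmgd(np.zeros(2), [lambda x: 2 * (x - c1), lambda x: 2 * (x - c2)])
    print(x_final)  # ends near a Pareto-stationary point on the segment c1-c2

The saving is visible in the loop: the weighting subproblem runs only once every period iterations, while the remaining steps reuse the cached weights at the cost of an ordinary weighted stochastic gradient step.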
Low Difficulty Summary (written by GrooveSquid.com; original content)
MOO is an important part of many machine learning applications, but it's often tricky to do efficiently. This paper proposes a new way to do MOO called Periodic Stochastic Multi-Gradient Descent (PSMGD). The idea is to make the process faster by recomputing the weights that balance the objectives only once in a while, instead of at every step. The authors show that PSMGD can find good solutions quickly, and that it works for different types of problems.

Keywords

» Artificial intelligence  » Backpropagation  » Gradient descent  » Machine learning  » Optimization