Summary of A First-Order Multi-Gradient Algorithm for Multi-Objective Bi-Level Optimization, by Feiyang Ye et al.
A First-Order Multi-Gradient Algorithm for Multi-Objective Bi-Level Optimization
by Feiyang Ye, Baijiong Lin, Xiaofeng Cao, Yu Zhang, Ivor Tsang
First submitted to arXiv on: 17 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper tackles the Multi-Objective Bi-Level Optimization (MOBLO) problem, in which the upper-level subproblem is a multi-objective optimization problem and the lower-level subproblem is a scalar one. Existing gradient-based methods require computing the Hessian matrix, which is computationally inefficient. To address this, the authors propose FORUM, a first-order multi-gradient method that reformulates MOBLO as a constrained multi-objective optimization problem using the value-function approach. They provide a theoretical analysis showing the efficiency and non-asymptotic convergence of FORUM, and demonstrate its effectiveness and efficiency on three benchmark multi-task learning datasets; a sketch of the formulation and of the multi-gradient idea follows this table. |
Low | GrooveSquid.com (original content) | This paper solves a tricky math problem called Multi-Objective Bi-Level Optimization (MOBLO). MOBLO has two parts: one that tries to optimize several things at once, and another that just tries to find the single best solution. The old way of solving this problem is slow because it requires lots of complicated calculations. To fix this, the researchers created a new method called FORUM that makes these calculations much simpler. They tested FORUM on several benchmark learning tasks and found that it worked really well. |
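For readers who want a bit more detail than the summaries above, here is a minimal sketch of the problem structure described in the medium-difficulty summary, written in generic notation (the symbols α, ω, F, f, and v are ours and may not match the paper's): the upper level minimizes a vector of objectives over α, the lower level solves a scalar problem in ω, and the value-function approach replaces the lower-level argmin with an inequality constraint, turning the bi-level problem into a single-level constrained multi-objective one.

```latex
% Generic MOBLO formulation (notation is ours, not necessarily the paper's):
% upper level: m objectives in \alpha; lower level: a single scalar objective in \omega.
\min_{\alpha}\; F\bigl(\alpha, \omega^{*}(\alpha)\bigr)
  = \Bigl(F_{1}\bigl(\alpha, \omega^{*}(\alpha)\bigr), \dots, F_{m}\bigl(\alpha, \omega^{*}(\alpha)\bigr)\Bigr)
\quad \text{s.t.} \quad
\omega^{*}(\alpha) \in \arg\min_{\omega} f(\alpha, \omega)

% Value-function reformulation as a single-level constrained multi-objective problem:
\min_{\alpha, \omega}\; F(\alpha, \omega)
\quad \text{s.t.} \quad
f(\alpha, \omega) - v(\alpha) \le 0,
\qquad v(\alpha) := \min_{\omega'} f(\alpha, \omega')
```

And below is a small, self-contained illustration of the "multi-gradient" idea using only first-order information: for two objectives, the MGDA-style min-norm combination of their gradients has a closed form and yields a common descent direction. This is a generic sketch of multi-gradient aggregation on a toy problem, not the paper's FORUM algorithm; the objectives, step size, and iteration count are made up for illustration.

```python
import numpy as np

def min_norm_weight(g1, g2):
    """Weight lam in [0, 1] minimizing ||lam*g1 + (1-lam)*g2||^2 (MGDA-style closed form)."""
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:
        return 0.5  # gradients coincide, so any convex weight works
    lam = float((g2 - g1) @ g2) / denom
    return float(np.clip(lam, 0.0, 1.0))

# Toy upper-level objectives sharing one parameter vector x:
# F1(x) = ||x - e1||^2 and F2(x) = ||x - e2||^2.
def grad_f1(x):
    return 2.0 * (x - np.array([1.0, 0.0]))

def grad_f2(x):
    return 2.0 * (x - np.array([0.0, 1.0]))

x, lr = np.zeros(2), 0.1
for _ in range(200):
    g1, g2 = grad_f1(x), grad_f2(x)
    lam = min_norm_weight(g1, g2)
    x = x - lr * (lam * g1 + (1.0 - lam) * g2)  # step along the common descent direction

print(x)  # approaches a Pareto-stationary point (about [0.5, 0.5]) between the two minima
```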
Keywords
* Artificial intelligence
* Multi-task
* Optimization