Online Mirror Descent for Tchebycheff Scalarization in Multi-Objective Optimization

by Meitong Liu, Xiaoyuan Zhang, Chulin Xie, Kate Donahue, Han Zhao

First submitted to arXiv on: 29 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper addresses a limitation of linear scalarization in multi-objective optimization (MOO), the technique of combining multiple objectives into a single value to optimize. Recent work has shown that linear scalarization often fails to capture the non-convex regions of the Pareto front, so it recovers the set of Pareto-optimal solutions only incompletely. The authors propose OMD-TCH, an online mirror descent algorithm that optimizes the worst-case objective via Tchebycheff scalarization, and prove a convergence rate of O(sqrt(log m / T)), where m is the number of objectives and T is the number of iteration rounds. They also introduce an adaptive online-to-batch conversion scheme that improves practical performance while preserving the same convergence guarantees. OMD-TCH is evaluated on synthetic problems and on federated learning tasks under fairness constraints, where it achieves state-of-the-art results.
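
To make the min-max update concrete, here is a minimal Python sketch of the idea behind OMD-TCH as described above: entropic mirror descent (a multiplicative-weights update) on simplex weights over the objectives, alternated with a gradient step on the model parameters, followed by a plain averaging online-to-batch conversion. The function names, toy problem, and step sizes are illustrative assumptions, not the authors' reference implementation, and the simple averaging stands in for their adaptive conversion scheme.

    import numpy as np

    def omd_tch(values_and_grads, x0, m, T, eta_x=0.05, eta_lam=None):
        """Sketch of min_x max_i f_i(x) via online mirror descent.

        values_and_grads(x) is assumed to return (f, G): f of shape (m,)
        holds the objective values, G of shape (m, dim) the per-objective
        gradients. Names and defaults here are illustrative.
        """
        if eta_lam is None:
            # Step size consistent with the O(sqrt(log m / T)) rate.
            eta_lam = np.sqrt(np.log(m) / T)
        x = np.asarray(x0, dtype=float).copy()
        lam = np.full(m, 1.0 / m)  # uniform weights on the simplex
        iterates = []
        for _ in range(T):
            f, G = values_and_grads(x)
            # Entropic mirror ascent on the weights: a multiplicative
            # update that shifts mass toward the worst-off objectives.
            lam *= np.exp(eta_lam * f)
            lam /= lam.sum()
            # Gradient descent on x under the lambda-weighted objective.
            x = x - eta_x * (lam @ G)
            iterates.append(x.copy())
        # Plain online-to-batch conversion: average the iterates.
        # (The paper's adaptive scheme reweights iterates instead.)
        return np.asarray(iterates).mean(axis=0)

    # Toy usage: two quadratics with conflicting minima at +1 and -1;
    # the min-max solution balances them at the origin.
    def toy(x):
        f = np.array([np.sum((x - 1.0) ** 2), np.sum((x + 1.0) ** 2)])
        G = np.stack([2.0 * (x - 1.0), 2.0 * (x + 1.0)])
        return f, G

    x_bar = omd_tch(toy, x0=np.array([2.0, 2.0]), m=2, T=500)

The entropic mirror map is what makes the weight update multiplicative rather than additive, and it is the source of the log m (rather than m) dependence in the convergence rate.
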
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper tackles a central problem in computer science called multi-objective optimization. Imagine trying to make the best decision when you have two or more conflicting goals, like finding the right balance between being healthy and having fun on vacation. The standard approach, called linear scalarization, often misses good solutions because it is too simple. The researchers propose a new method, OMD-TCH, that does better by always paying the most attention to whichever goal is currently doing worst. They also come up with a clever way to turn their step-by-step (online) algorithm into one that returns a single final answer (batch), which works even better in practice. They test the method on made-up problems and on a real-world task called federated learning, where it helps make sure different groups of people get fair results.

Keywords

» Artificial intelligence  » Federated learning  » Optimization