Summary of Common Pitfalls to Avoid While Using Multiobjective Optimization in Machine Learning, by Junaid Akhter et al.


Common pitfalls to avoid while using multiobjective optimization in machine learning

by Junaid Akhter, Paul David Fährmann, Konstantin Sonntag, Sebastian Peitz

First submitted to arXiv on: 2 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Optimization and Control (math.OC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper presents a comprehensive resource for machine learning (ML) practitioners seeking to use multiobjective optimization (MOO) techniques. It reviews previous studies on MOO in deep learning, identifies misconceptions, and highlights common pitfalls. Using Physics-Informed Neural Networks (PINNs) as a case study, the authors demonstrate the interplay between data loss and physics loss terms. They introduce well-known approaches such as the weighted sum (WS) method as well as more complex techniques such as the multiple-gradient descent algorithm (MGDA). They also compare results from WS and MGDA with NSGA-II, emphasizing the importance of understanding the specific problem, the objective space, and the selected MOO method.

Low Difficulty Summary (written by GrooveSquid.com, original content)
MOO is a way to pursue multiple goals at once. In machine learning, this means finding solutions that balance different objectives against each other. This paper helps ML practitioners understand how to use MOO in their work. It reviews what others have done and shares common mistakes to avoid. The authors show an example of using MOO with a special type of neural network called a PINN. They also compare different ways of doing MOO, such as the weighted sum method and multiple-gradient descent.

Keywords

» Artificial intelligence  » Deep learning  » Gradient descent  » Machine learning  » Neural network  » Optimization