
Summary of "Seeking Consistent Flat Minima for Better Domain Generalization via Refining Loss Landscapes", by Aodi Li et al.


Seeking Consistent Flat Minima for Better Domain Generalization via Refining Loss Landscapes

by Aodi Li, Liansheng Zhuang, Xiao Long, Minghong Yao, Shafei Wang

First submitted to arXiv on: 18 Dec 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
In this paper, the researchers propose a novel approach to domain generalization called Self-Feedback Training (SFT), which aims to learn a model that generalizes well to multiple unseen test domains. The SFT framework iteratively refines loss landscapes during training by generating feedback signals from the inconsistency of those landscapes across different domains. Through this progressive refinement, the model converges to flat minima that are consistent across all training domains. Experimental results on the DomainBed benchmark show that SFT outperforms state-of-the-art sharpness-aware methods and other domain generalization baselines.
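The summary above describes SFT only at a high level, so the following is a minimal PyTorch sketch of what a cross-domain consistency training step of this general shape could look like. Everything here is an illustrative assumption rather than the paper's method: the network, the data, the LAMBDA weight, and in particular the use of per-domain loss variance as a crude stand-in for the paper's landscape-inconsistency feedback signal.

```python
import torch
import torch.nn as nn

# Hypothetical sketch only. The real SFT feedback signal is defined in
# the paper; here the variance of per-domain losses is used as a simple
# proxy for "inconsistency of loss landscapes across domains".

LAMBDA = 0.1  # weight of the consistency feedback term (assumed value)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()

def consistency_step(domain_batches):
    """One training step: average task loss over domains, plus a penalty
    on cross-domain disagreement acting as the feedback signal."""
    per_domain_losses = []
    for x, y in domain_batches:            # one (inputs, labels) batch per domain
        per_domain_losses.append(criterion(model(x), y))
    losses = torch.stack(per_domain_losses)
    task_loss = losses.mean()
    inconsistency = losses.var()           # proxy feedback: spread across domains
    total = task_loss + LAMBDA * inconsistency
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return task_loss.item(), inconsistency.item()

# Toy usage: random tensors standing in for batches from three domains.
domains = [(torch.randn(8, 32), torch.randint(0, 2, (8,))) for _ in range(3)]
print(consistency_step(domains))
```

Penalizing cross-domain loss spread is one common way to encourage solutions that behave similarly in every training domain; how SFT actually constructs and applies its feedback signals, and how flatness enters the objective, is specified in the original paper.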
Low Difficulty Summary (written by GrooveSquid.com; original content)
Domain generalization is the ability of a machine learning model to learn from multiple training domains and apply what it has learned to new, unseen test domains. This paper introduces an approach called Self-Feedback Training (SFT) that helps models generalize better across domains. SFT works by refining loss landscapes during training, which makes the model's behavior more consistent across domains and its predictions more accurate.

Keywords

» Artificial intelligence  » Domain generalization  » Machine learning