


Clustering-Based Validation Splits for Model Selection under Domain Shift

by Andrea Napoli, Paul White

First submitted to arXiv on: 29 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper presents a novel method for model selection under domain shift, motivated by principles from distributionally robust optimisation (DRO) and domain adaptation theory. The authors propose maximising the distribution mismatch between the training and validation sets, using the maximum mean discrepancy (MMD) as the measure of mismatch; this reduces the partitioning problem to kernel k-means clustering. A constrained clustering algorithm with convergence guarantees is introduced, which leverages linear programming to control the size, label, and group distributions of the splits without requiring additional metadata. Experimental results show that this technique outperforms alternative splitting strategies across various datasets and training algorithms for both domain generalisation (DG) and unsupervised domain adaptation (UDA) tasks.
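The core idea above — scoring a candidate train/validation partition by the MMD between its two sides — can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: the RBF kernel, its bandwidth, and the synthetic two-domain data are assumptions made here for demonstration, and the cluster-based split merely stands in for what kernel k-means would find.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd_squared(X, Y, gamma=1.0):
    # Biased estimate of the squared maximum mean discrepancy between
    # samples X and Y: mean k(X,X) + mean k(Y,Y) - 2 * mean k(X,Y)
    kxx = rbf_kernel(X, X, gamma).mean()
    kyy = rbf_kernel(Y, Y, gamma).mean()
    kxy = rbf_kernel(X, Y, gamma).mean()
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(0)
# Toy dataset drawn from two shifted "domains" (assumed for illustration)
data = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
                  rng.normal(3.0, 1.0, (50, 2))])

# Score two candidate train/validation splits by MMD: a random split
# versus one that separates the two clusters (a stand-in for the
# clustering-based split the paper derives via kernel k-means).
perm = rng.permutation(100)
mmd_random = mmd_squared(data[perm[:50]], data[perm[50:]])
mmd_cluster = mmd_squared(data[:50], data[50:])
print(mmd_cluster > mmd_random)
```

On this toy data the cluster-separating split yields a much larger MMD than the random split, which is exactly the mismatch the method seeks to maximise so that validation performance reflects robustness to distribution shift.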
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about finding the best way to split data into two parts when there’s a big difference between them. It uses special math ideas to figure out the best split, which helps make predictions better. The authors tested this idea on lots of different datasets and it worked really well for many cases. This could be important for making machines that can understand different types of data.

Keywords

» Artificial intelligence  » Clustering  » Domain adaptation  » K means  » Unsupervised