Summary of Metric-DST: Mitigating Selection Bias Through Diversity-Guided Semi-Supervised Metric Learning, by Yasin I. Tepeli et al.
Metric-DST: Mitigating Selection Bias Through Diversity-Guided Semi-Supervised Metric Learning
by Yasin I. Tepeli, Mathijs de Wolf, Joana P. Gonçalves
First submitted to arXiv on: 27 Nov 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | In this paper, researchers tackle the issue of selection bias in machine learning models, which can lead to undesirable behavior for underrepresented profiles. Semi-supervised learning strategies like self-training can help mitigate this problem by incorporating unlabeled data into model training. However, conventional self-training methods that focus on high-confidence data samples may actually reinforce existing biases and compromise effectiveness. The authors propose a new approach called Metric-DST, which uses metric learning to include more diverse samples in the training process. This strategy was tested on several datasets with induced bias, including generated and real-world data, as well as a molecular biology prediction task with intrinsic bias. The results show that Metric-DST can learn more robust models that are less biased than conventional self-training approaches. |
| Low | GrooveSquid.com (original content) | Machine learning models can have problems when they’re trained on data that’s not representative of the whole population. This can make them behave badly for people or groups who aren’t well-represented in the training data. One way to fix this is by using semi-supervised learning, which includes unlabeled data in the training process. But some methods that do this can actually make things worse by focusing on the most confident data points, which may be biased towards certain groups. The authors of this paper propose a new approach called Metric-DST, which uses special techniques to include more diverse samples in the training process. This helps the model learn to be fairer and less biased. |
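The summaries above contrast conventional self-training, which adds only the highest-confidence pseudo-labeled samples, with a diversity-guided selection over a learned embedding space. As a rough illustration of that idea (this is a simplified sketch, not the authors' implementation; the function name, greedy max-min-distance heuristic, and confidence threshold are all assumptions for demonstration), one could filter pseudo-labeled samples by a modest confidence floor and then greedily pick points that lie far from those already selected:

```python
import numpy as np

def diversity_guided_selection(embeddings, confidences, n_select, conf_threshold=0.6):
    """Illustrative diversity-guided sample selection for self-training.

    Among unlabeled samples whose pseudo-label confidence passes a threshold,
    greedily pick points that maximize the distance to the nearest already
    selected point in the embedding space (max-min distance heuristic).
    """
    candidates = list(np.where(confidences >= conf_threshold)[0])
    if not candidates:
        return []
    # Seed with the most confident candidate.
    seed = candidates[int(np.argmax(confidences[candidates]))]
    selected = [seed]
    remaining = [i for i in candidates if i != seed]
    while len(selected) < n_select and remaining:
        # For each remaining candidate, distance to its nearest selected sample.
        dists = [
            min(np.linalg.norm(embeddings[i] - embeddings[j]) for j in selected)
            for i in remaining
        ]
        pick = remaining[int(np.argmax(dists))]  # farthest from current selection
        selected.append(pick)
        remaining.remove(pick)
    return selected
```

In a conventional self-training loop the selection step would instead take the top-confidence samples directly; swapping in a diversity criterion like the one sketched here is what keeps the pseudo-labeled set from collapsing onto the already well-represented regions of the data.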
Keywords
» Artificial intelligence » Machine learning » Self-training » Semi-supervised