Summary of Proxy-informed Bayesian Transfer Learning with Unknown Sources, by Sabina J. Sloman et al.


Proxy-informed Bayesian transfer learning with unknown sources

by Sabina J. Sloman, Julien Martinelli, Samuel Kaski

First submitted to arXiv on: 5 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Machine Learning (stat.ML)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a method to address negative transfer, a phenomenon in which a machine learning model performs worse on target data after incorporating source data. The authors give a Bayesian account of negative transfer and develop PROMPT, a proxy-informed robust method for probabilistic transfer learning. PROMPT requires no prior knowledge of the source data and remains applicable when the differences between tasks are unobserved, which makes it useful when only noisy, indirect information is available. The method exploits proxy information, such as human feedback, to improve performance on the target task without fine-tuning on target data. The authors’ theoretical results show that PROMPT is effective at mitigating the threat of negative transfer.
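
As a rough illustration only (not the paper's actual PROMPT algorithm), the sketch below temper-weights a source likelihood by a proxy-derived similarity score in a toy conjugate Gaussian model. The `proxy_similarity` variable, the tempered-likelihood weighting scheme, and all numbers are assumptions made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative only): a scarce target task and an abundant
# but shifted source task.
target = rng.normal(loc=1.0, scale=1.0, size=5)
source = rng.normal(loc=3.0, scale=1.0, size=200)

def gaussian_update(data, weight, prior_mean, prior_var, noise_var=1.0):
    """Conjugate Gaussian mean update with a tempered (weighted) likelihood."""
    post_var = 1.0 / (1.0 / prior_var + weight * len(data) / noise_var)
    post_mean = post_var * (prior_mean / prior_var + weight * data.sum() / noise_var)
    return post_mean, post_var

# Hypothetical proxy signal in [0, 1], e.g. noisy human feedback about
# how similar the source task is to the target task. Low similarity
# means the source likelihood is strongly down-weighted.
proxy_similarity = 0.1

# Posterior from target data alone.
m, v = gaussian_update(target, 1.0, prior_mean=0.0, prior_var=10.0)

# Naive transfer: also absorb the source data at full weight.
m_naive, _ = gaussian_update(source, 1.0, prior_mean=m, prior_var=v)

# Proxy-tempered transfer: down-weight the source likelihood.
m_proxy, _ = gaussian_update(source, proxy_similarity, prior_mean=m, prior_var=v)

print(f"target-only posterior mean   : {m:.2f}")
print(f"naive pooled posterior mean  : {m_naive:.2f}  (dragged toward the shifted source)")
print(f"proxy-tempered posterior mean: {m_proxy:.2f}  (less affected by the source)")
```
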
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper addresses a problem with machine learning models called negative transfer. Normally, when we train a model on one set of data and then use it to make predictions on another, the extra training makes those predictions better. But sometimes, despite having more training data, the model actually gets worse. The authors propose a new way to avoid this problem by using indirect information, such as feedback from humans. The approach is useful when we don't know much about the differences between tasks, or when we can't get direct information from one of them. It's an important step toward making machine learning models more reliable.
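
For readers who want to see the phenomenon itself, here is a tiny simulation with made-up numbers: pooling scarce target data with plentiful but shifted source data increases the error of a simple mean estimate, which is exactly the "doing worse with more data" behavior described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up numbers: the target task has true mean 1.0; the source task
# is shifted to mean 3.0. We estimate the target mean with and without
# the source data and compare mean squared errors over many trials.
n_trials = 1000
err_target_only = err_pooled = 0.0
for _ in range(n_trials):
    target = rng.normal(1.0, 1.0, size=5)    # scarce target data
    source = rng.normal(3.0, 1.0, size=50)   # plentiful but shifted source data
    err_target_only += (target.mean() - 1.0) ** 2
    err_pooled += (np.concatenate([target, source]).mean() - 1.0) ** 2

print(f"MSE, target data only: {err_target_only / n_trials:.3f}")
# The pooled estimate is biased toward the source mean: negative transfer.
print(f"MSE, source + target : {err_pooled / n_trials:.3f}")
```
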

Keywords

» Artificial intelligence  » Fine tuning  » Machine learning  » Prompt  » Transfer learning