Summary of "Learning from negative feedback, or positive feedback or both" by Abbas Abdolmaleki et al.
Learning from negative feedback, or positive feedback or both
by Abbas Abdolmaleki, Bilal Piot, Bobak Shahriari, Jost Tobias Springenberg, Tim Hertweck, Rishabh Joshi, Junhyuk Oh, Michael Bloesch, Thomas Lampe, Nicolas Heess, Jonas Buchli, Martin Riedmiller
First submitted to arXiv on: 5 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper but is written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | This paper presents a novel approach to preference optimization that decouples learning from positive and negative feedback, giving explicit control over the influence of each type. The method builds on the probabilistic framework of Dayan and Hinton (1997) and extends expectation-maximization (EM) algorithms to explicitly incorporate negative examples. This enables stable learning from negative feedback alone, a setting that current methods do not handle well. The approach is evaluated on training language models from human feedback and on training policies for sequential decision-making problems.
Low | GrooveSquid.com (original content) | This paper helps us learn better from the feedback we get. Right now, many algorithms need both positive and negative examples to work well. But what if we only have one type of feedback? That’s where this new method comes in: it lets us control how much we use each type of feedback, and it can even learn from the negative examples alone! This is important because sometimes we might not have any positive examples but still want to learn something. The researchers also fixed a problem with older algorithms that only looked at positive examples and ignored the negatives. They tested the new method on training language models and on decision-making policies.
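To make the decoupling idea concrete, here is a minimal PyTorch sketch. It is an illustration of the general principle (separately weighted positive and negative feedback terms), not the paper's actual EM-based objective; the function name `decoupled_feedback_loss` and the weights `alpha` and `beta` are hypothetical names chosen for this example.

```python
import torch

def decoupled_feedback_loss(logp_pos, logp_neg, alpha=1.0, beta=1.0):
    """Illustrative loss that weights positive and negative feedback separately.

    alpha scales learning from positive examples (maximize their log-likelihood);
    beta scales learning from negative examples (minimize their probability).
    alpha=0 gives learning from negative feedback alone; beta=0 recovers
    positives-only training.
    """
    pos_term = -logp_pos.mean()        # standard MLE pull toward positive samples
    neg_term = logp_neg.exp().mean()   # bounded push of probability mass off negatives
    return alpha * pos_term + beta * neg_term

# Toy usage: a categorical "policy" over 4 actions, with action 0 marked
# positive and action 3 marked negative.
logits = torch.zeros(4, requires_grad=True)
opt = torch.optim.SGD([logits], lr=0.5)
for _ in range(100):
    logp = torch.log_softmax(logits, dim=0)
    loss = decoupled_feedback_loss(logp[[0]], logp[[3]], alpha=1.0, beta=1.0)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(torch.softmax(logits, dim=0))  # mass shifts toward action 0, away from action 3
```

Note the design choice in the negative term: penalizing the probability `exp(logp)` rather than the raw `-logp` keeps the loss bounded as the negative example's probability goes to zero, which is one simple way to get the stable negative-only learning the summary describes (the paper's own mechanism may differ).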
Keywords
- Artificial intelligence
- Optimization