
Summary of Pragmatic Feature Preferences: Learning Reward-Relevant Preferences from Human Input, by Andi Peng, Yuying Sun, Tianmin Shu, and David Abel


Pragmatic Feature Preferences: Learning Reward-Relevant Preferences from Human Input

by Andi Peng, Yuying Sun, Tianmin Shu, David Abel

First submitted to arXiv on: 23 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation and Language (cs.CL)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper proposes a novel approach to inferring reward models from preference data by extracting fine-grained information about why an example is preferred, inspired by human communication. The method enriches binary preference queries to ask both which features of a given example are preferable and which example is preferred overall. The authors derive an approach for learning from these feature-level preferences in cases where users specify which features are reward-relevant and which are not. Evaluations in linear bandit settings on vision- and language-based domains show that the approach converges to accurate rewards with fewer comparisons than example-only labels. Finally, a behavioral experiment on mushroom foraging validates the real-world applicability of the method. (A rough illustration of feature-level preference learning appears in the sketch after these summaries.)

Low Difficulty Summary (original content by GrooveSquid.com)
The paper helps us understand how people learn from others and makes machines better at learning too! It's like helping your friend pick a video game by telling them why one game is better than another. The researchers want computers to do this too, so they can learn what we want more quickly and accurately. They come up with a new way of asking questions that helps computers figure out why something is good or bad. It works really well in tests and even works with real people trying to pick the best mushrooms!

Keywords

» Artificial intelligence