Summary of Value Internalization: Learning and Generalizing from Social Reward, by Frieda Rong and Max Kleiman-Weiner
Value Internalization: Learning and Generalizing from Social Reward
by Frieda Rong, Max Kleiman-Weiner
First submitted to arXiv on: 19 Jul 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Multiagent Systems (cs.MA)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract on the paper’s arXiv page |
| Medium | GrooveSquid.com (original content) | In this paper, researchers explore how social rewards shape human behavior and propose a model of value internalization to understand how these behaviors persist and generalize when the caregiver is no longer present. The model relies on an internal social reward (ISR) that generates internal rewards when social rewards are unavailable. Empirical simulations demonstrate that the ISR model prevents agents from unlearning socialized behaviors and enables generalization to out-of-distribution tasks. The study also highlights the implications of incomplete internalization, which can lead to “reward hacking” on the ISR. Furthermore, the authors show that their model internalizes prosocial behavior in a multi-agent environment. |
| Low | GrooveSquid.com (original content) | Social rewards are what make us do things because they’re good for others or society as a whole. When we’re young, our caregivers help us learn behaviors that align with these rewards. But how do these behaviors stick around when our caregiver isn’t there anymore? Researchers propose an “internal social reward” (ISR) model to figure this out. It’s like having a little inner voice that gives you a sense of accomplishment and happiness when you make good choices, even if no one else is watching. This helps us keep learning and adapting without needing constant feedback from others. |
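The core ISR idea in the summaries above can be sketched in a few lines: an agent learns an internal model that imitates the caregiver’s social reward, then uses that internal model as a reward signal once the caregiver is absent. The sketch below is a minimal, hypothetical illustration (not the paper’s actual architecture): the “caregiver” reward is an assumed linear function of action features, and the ISR is fit to it by an online least-squares update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): the caregiver rewards behaviors
# whose feature vector aligns with a fixed target direction.
target = np.array([1.0, -0.5])

def social_reward(features):
    # Caregiver feedback, only available during the socialization phase.
    return float(features @ target)

# Internal social reward (ISR): a linear model trained to imitate the
# caregiver's feedback while that feedback is still available.
w = np.zeros(2)
lr = 0.1
for _ in range(500):
    f = rng.normal(size=2)          # features of an observed behavior
    r = social_reward(f)            # caregiver's social reward
    w += lr * (r - w @ f) * f       # online least-squares update

def internal_reward(features):
    # Stands in for the social reward once the caregiver is absent,
    # so socialized behavior is not unlearned.
    return float(w @ features)

# After training, the ISR closely tracks the social reward on new inputs.
f_new = rng.normal(size=2)
print(internal_reward(f_new) - social_reward(f_new))
```

Because the internal model only approximates the true social reward, an agent optimizing it can exploit the approximation error, which is the “reward hacking” on the ISR that the paper warns about under incomplete internalization.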
Keywords
* Artificial intelligence
* Generalization