Summary of Aligning LLM Agents by Learning Latent Preference from User Edits, by Ge Gao et al.
Aligning LLM Agents by Learning Latent Preference from User Edits
by Ge Gao, Alexey Taymanov, Eduardo Salinas, Paul Mineiro, Dipendra Misra
First submitted to arXiv on: 23 Apr 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Information Retrieval (cs.IR); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | As researchers explore ways to personalize large language models (LLMs), we focus on interactive learning, where users edit the agent’s output to align it with their preferences. Our proposed framework, PRELUDE, infers a textual description of the user’s latent preference from historic edit data and uses that description to generate future responses, avoiding the cost and potential performance degradation of fine-tuning. Because user preferences can be complex and context-dependent, we introduce CIPHER, a simple algorithm that queries an LLM to infer the preference for a specific context from the user’s edits. We evaluate our approach on summarization and email-writing tasks, achieving lower edit distance costs with minimal overhead in LLM query cost. (A minimal sketch of this interaction loop follows the table.) |
Low | GrooveSquid.com (original content) | Language models are getting better at generating text, but how do we make them more personalized to individual users? Researchers have found a way to learn user preferences by looking at how people edit the model’s output, an approach called interactive learning. The goal is to use this information to generate responses that fit each person’s unique style and preferences. |
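The medium-difficulty summary describes an interactive loop: the agent generates a response, the user edits it, the edit distance is that round’s cost, and the system infers a reusable textual preference from the edit. The Python sketch below illustrates one such CIPHER-style round. It is a minimal sketch under stated assumptions: `llm` is any text-completion callable you supply, the prompt wording and function names (`interaction_round`, `retrieve_preferences`) are illustrative, and plain string edit distance stands in for the context-similarity retrieval a real system would use. It is not the authors’ implementation.

```python
from typing import Callable, List, Tuple

# (context, inferred preference) pairs accumulated across rounds.
History = List[Tuple[str, str]]

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute
        prev = curr
    return prev[-1]

def retrieve_preferences(context: str, history: History, k: int = 3) -> List[str]:
    """Preferences inferred for the k most similar past contexts.
    String edit distance is a dependency-free stand-in for the
    embedding-based similarity a real system would likely use."""
    ranked = sorted(history, key=lambda item: edit_distance(item[0], context))
    return [pref for _, pref in ranked[:k]]

def interaction_round(context: str,
                      llm: Callable[[str], str],
                      get_user_edit: Callable[[str], str],
                      history: History) -> int:
    """One round: generate with retrieved preferences, observe the user's
    edit, store a newly inferred preference, and return the edit cost."""
    prefs = retrieve_preferences(context, history)
    prompt = (f"Follow these user preferences: {'; '.join(prefs)}\n\n{context}"
              if prefs else context)
    response = llm(prompt)

    edited = get_user_edit(response)        # the user revises the output
    cost = edit_distance(response, edited)  # this round's cost

    if cost > 0:  # learn only when the user actually changed something
        pref = llm("In one sentence, describe the writing preference implied "
                   f"by this edit.\nBEFORE:\n{response}\nAFTER:\n{edited}")
        history.append((context, pref))
    return cost
```

Over repeated rounds, the retrieved preference descriptions should steer generation closer to the user’s style, driving the per-round edit distance down without any fine-tuning of the underlying model.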
Keywords
» Artificial intelligence » Fine tuning » Summarization