Disentangling Likes and Dislikes in Personalized Generative Explainable Recommendation

by Ryotaro Shimizu, Takashi Wada, Yu Wang, Johannes Kruse, Sean O’Brien, Sai HtaungKham, Linxin Song, Yuya Yoshikawa, Yuki Saito, Fugee Tsung, Masayuki Goto, Julian McAuley

First submitted to arXiv on: 17 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Information Retrieval (cs.IR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper introduces new evaluation methods for explainable recommendation, where existing metrics focus only on textual similarity between predicted and ground-truth explanations. The proposed approach instead assesses whether generated explanations accurately reflect users’ sentiments towards recommended items. To achieve this, the authors construct new datasets by extracting users’ opinions from post-purchase reviews with a Large Language Model (LLM). The evaluation metrics consider two aspects: alignment with users’ overall sentiments, and accurate identification of both their positive and negative opinions. Benchmarking several recent models on these datasets, the study finds that strong performance on existing metrics does not guarantee sentiment-aware explanations; however, incorporating users’ predicted ratings as input improves the models’ ability to provide them.
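
As a rough illustration of the dataset-construction step, the sketch below prompts an LLM to split a review into liked and disliked aspects. The llm_complete callable and the JSON prompt format are assumptions made for this sketch, not the paper’s actual pipeline.

import json

def extract_opinions(review_text, llm_complete):
    """Split a post-purchase review into liked and disliked aspects.

    `llm_complete` is a placeholder for any prompt-in, text-out LLM
    call (an assumption for this sketch, not the paper's setup).
    """
    prompt = (
        "Read the product review below and list the aspects the "
        "user liked and disliked. Answer only with JSON of the form "
        '{"likes": [...], "dislikes": [...]}.\n\n'
        "Review: " + review_text
    )
    # Parse the model's JSON answer into Python lists.
    return json.loads(llm_complete(prompt))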

Low Difficulty Summary (original content by GrooveSquid.com)
The paper is about making recommendations better by understanding what people liked or disliked after buying something. Current recommendation systems simply try to generate a good explanation for why an item was recommended, without checking whether that explanation matches how users actually feel about it. The authors create new datasets that include people’s opinions and propose new ways to evaluate whether explanations are accurate. They find that current methods don’t capture user sentiment well, and that giving models extra information about what people might like or dislike makes them better at it.
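
To make the evaluation idea concrete, here is a deliberately simplified check, not the paper’s exact metric: it measures how many ground-truth liked and disliked aspects a generated explanation mentions, so an explanation that only praises the item is penalized on the dislike side. The example aspects and explanation text are invented for illustration.

def sentiment_coverage(explanation, likes, dislikes):
    """Toy sentiment-aware check: fraction of ground-truth positive
    and negative aspects the explanation mentions. A real metric
    would match opinions more robustly than substring search."""
    text = explanation.lower()
    pos_hits = sum(aspect.lower() in text for aspect in likes)
    neg_hits = sum(aspect.lower() in text for aspect in dislikes)
    return {
        "positive_recall": pos_hits / len(likes) if likes else 1.0,
        "negative_recall": neg_hits / len(dislikes) if dislikes else 1.0,
    }

# An all-praise explanation scores 0.0 negative recall, revealing
# that it ignores the user's dislikes.
print(sentiment_coverage(
    "You may like this jacket for its warm lining and stylish fit.",
    likes=["warm lining", "stylish fit"],
    dislikes=["runs small"],
))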

Keywords

» Artificial intelligence  » Alignment  » Large language model