Summary of Language-Based User Profiles for Recommendation, by Joyce Zhou et al.
Language-Based User Profiles for Recommendation
by Joyce Zhou, Yijia Dai, Thorsten Joachims
First submitted to arXiv on: 23 Feb 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Human-Computer Interaction (cs.HC); Information Retrieval (cs.IR); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper and is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | The paper's original abstract, available on arXiv.
Medium | GrooveSquid.com (original content) | The Language-based Factorization Model (LFM) is an innovative approach to user profiling that leverages large language models (LLMs) to generate human-readable text summaries. By representing user profiles as natural-language descriptions, LFM addresses the limitations of traditional matrix factorization methods, which often struggle with interpretability and cold-start performance. The proposed encoder/decoder architecture combines two LLMs to produce a compact summary profile from a user's rating history, and it demonstrates improved accuracy in cold-start settings compared to matrix factorization. Additionally, generating human-readable summaries with LFM can perform competitively with direct LLM prediction while offering better interpretability and a shorter model input length (a minimal sketch of this encoder/decoder pattern follows the table).
Low | GrooveSquid.com (original content) | The Language-based Factorization Model is a new way to understand what people like. Most methods try to put users into categories based on how they behave, but this doesn't work well when we don't have much information about someone. This method uses big language models to create a short summary of what each person likes. The researchers tested this approach and found that it works better than other methods in situations where there isn't much data. It also helps explain why people like something, which is important for making good recommendations.
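The encoder/decoder architecture described in the medium summary can be pictured as two chained LLM calls: one call compresses a user's rating history into a natural-language profile, and a second call predicts ratings for new items from that profile alone. The sketch below is a rough illustration under that reading; the `llm` callable, prompt wording, and rating format are assumptions made for illustration, not the paper's actual implementation.

```python
# Minimal sketch of the encoder/decoder pattern: LLM #1 summarizes the rating
# history into a readable text profile, LLM #2 predicts ratings from the profile.
# The `llm` callable, prompts, and data layout are illustrative assumptions.
from typing import Callable

def encode_profile(llm: Callable[[str], str],
                   rating_history: list[tuple[str, int]]) -> str:
    """Encoder step: summarize the user's rating history as human-readable text."""
    history_text = "\n".join(f"- {title}: {stars}/5" for title, stars in rating_history)
    prompt = ("Summarize this user's tastes in two or three sentences, "
              "based on their ratings:\n" + history_text)
    return llm(prompt)

def decode_rating(llm: Callable[[str], str], profile: str, item: str) -> str:
    """Decoder step: predict a rating for a new item using only the text profile."""
    prompt = (f"User profile: {profile}\n"
              f"Candidate item: {item}\n"
              "Predict the user's rating from 1 to 5. Answer with a single number.")
    return llm(prompt)

# Example usage with any chat-completion client wrapped as `my_llm(prompt) -> str`:
#   profile = encode_profile(my_llm, [("The Matrix", 5), ("Titanic", 2)])
#   rating  = decode_rating(my_llm, profile, "Blade Runner")
```

Because the intermediate profile is plain text, it can be inspected directly, which is the interpretability advantage the summaries highlight over latent matrix-factorization embeddings.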
Keywords
* Artificial intelligence
* Encoder decoder