Imagery as Inquiry: Exploring A Multimodal Dataset for Conversational Recommendation
by Se-eun Yoon, Hyunsik Jeon, Julian McAuley
First submitted to arXiv on: 23 May 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | This paper introduces a multimodal dataset in which users express their preferences through images, spanning visual expressions from landscapes to artistic depictions. The dataset supports two recommendation tasks: title generation and multiple-choice selection, where users request books or music that evoke feelings similar to those captured in their images, and the community endorses recommendations through upvotes. Evaluations of large foundation models reveal their limitations on these tasks; notably, vision-language models show no significant advantage over language-only counterparts given image descriptions. The authors propose a new prompting method, chain-of-imagery, which leads to notable improvements.
Low | GrooveSquid.com (original content) | This paper collects pictures that people share to show what kind of books or music they are in the mood for. It’s like asking someone “what’s your favorite music?” and having them show you a picture instead of saying a name. The dataset has two main tasks: coming up with a fitting book or song title, and picking the best option from a list. Other people vote on the recommendations, so the community decides which ones really fit. Big AI models are tested on this dataset and don’t do as well as expected, even when they can look at the images directly. To fix this, the authors came up with a new way of asking the models questions that really helps them understand what people mean.
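The summaries mention the chain-of-imagery prompting method but do not reproduce its details. One plausible shape, sketched here purely as an assumption (the paper's actual prompts may differ), is a two-step flow: first ask the model to elaborate the imagery and the mood it evokes, then condition the recommendation on that elaboration. The `llm` function below is a hypothetical stand-in for any text-generation API:

```python
# Hedged sketch of a chain-of-imagery-style prompt, NOT the paper's exact method.
# `llm` is a hypothetical placeholder; a real system would call a language or
# vision-language model here.

def llm(prompt: str) -> str:
    # Stub: echoes a marker so the two-step flow can be traced.
    return f"[model response to: {prompt[:40]}...]"

def chain_of_imagery_recommend(image_description: str, domain: str = "books") -> str:
    # Step 1: have the model describe the scene and the feelings it evokes.
    imagery = llm(
        "Describe this scene and the mood it evokes:\n" + image_description
    )
    # Step 2: condition the recommendation on that elaborated imagery.
    return llm(
        f"Recommend {domain} that evoke similar feelings to this imagery:\n{imagery}"
    )

print(chain_of_imagery_recommend("a foggy mountain lake at dawn"))
```

Splitting the task this way mirrors chain-of-thought prompting: the intermediate imagery description gives the model an explicit representation of the user's mood to recommend against, rather than jumping straight from image to title.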
Keywords
» Artificial intelligence » Prompting