Summary of DOCCI: Descriptions of Connected and Contrasting Images, by Yasumasa Onoe et al.
DOCCI: Descriptions of Connected and Contrasting Images
by Yasumasa Onoe, Sunayana Rane, Zachary Berger, Yonatan Bitton, Jaemin Cho, Roopal Garg, Alexander Ku, Zarana Parekh, Jordi Pont-Tuset, Garrett Tanzer, Su Wang, Jason Baldridge
First submitted to arXiv on: 30 Apr 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper’s original abstract, available on arXiv. |
| Medium | GrooveSquid.com (original content) | The paper introduces Descriptions of Connected and Contrasting Images (DOCCI), a novel dataset of 15k images paired with human-annotated English descriptions averaging 136 words. The descriptions are crafted to highlight challenges such as spatial relations, counting, text rendering, and world knowledge. The authors demonstrate the effectiveness of DOCCI for image-to-text generation by finetuning a PaLI 5B model, achieving results equal or superior to those of larger models such as LLaVA-1.5 7B and InstructBLIP 7B. Additionally, DOCCI serves as a useful testbed for text-to-image generation, revealing the limitations of current text-to-image models in capturing long descriptions and fine details (see the code sketches after this table). |
| Low | GrooveSquid.com (original content) | This paper creates a new dataset with pictures and detailed descriptions to help machines learn about images. The descriptions are written by humans to make sure they accurately capture important things like shapes, counting, and textures. This dataset is useful for training machines to turn images into text or vice versa, and the authors show that it can improve the performance of machine learning models. |
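To make the image-to-text use concrete, here is a minimal fine-tuning sketch. PaLI 5B is not publicly released, so an open captioning model (BLIP) stands in for it; the Hugging Face dataset ID `google/docci` and its `image`/`description` field names are assumptions, not details confirmed by the paper.

```python
# Minimal sketch: fine-tuning an open captioning model on DOCCI-style
# (image, long description) pairs. BLIP is a stand-in for PaLI 5B,
# which is not publicly available.
import torch
from datasets import load_dataset
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
)

# Assumed dataset ID and field names; adjust to the actual release.
dataset = load_dataset("google/docci", split="train")

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()

for example in dataset.select(range(8)):  # tiny demo slice, not a real training run
    inputs = processor(
        images=example["image"],
        text=example["description"],
        return_tensors="pt",
        truncation=True,
    )
    # BLIP returns a language-modeling loss when labels are provided.
    outputs = model(**inputs, labels=inputs["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

A real run would batch examples, pad sequences, and train for multiple epochs; the loop above only shows the shape of the data flow.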
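For the text-to-image direction, a probe could look like the sketch below. Stable Diffusion is used as a stand-in for the models evaluated in the paper, and the prompt is a hypothetical DOCCI-style description, not an actual dataset entry. Its CLIP-style text encoder truncates prompts at 77 tokens, which is one concrete way long descriptions get lost.

```python
# Sketch: probing a text-to-image model with a long, DOCCI-style description.
# Stable Diffusion stands in for the models evaluated in the paper.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Hypothetical DOCCI-style description (not from the dataset): it mixes
# spatial relations, counting, and text rendering in one prompt.
description = (
    "A weathered wooden bench with three horizontal slats sits on a gravel "
    "path. Behind it, two cyclists pass a green sign reading 'TRAIL CLOSED', "
    "and a cluster of five orange traffic cones blocks the left fork."
)

image = pipe(description).images[0]
image.save("docci_probe.png")
```

Checking whether the output gets the slat count, the sign text, and the cone cluster right is exactly the kind of fine-grained evaluation the summary says DOCCI enables.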
Keywords
» Artificial intelligence » Image generation » Machine learning » Text generation