Summary of Enhancing Cross-modal Contextual Congruence For Crowdfunding Success Using Knowledge-infused Learning, by Trilok Padhi et al.
Enhancing Cross-Modal Contextual Congruence for Crowdfunding Success using Knowledge-infused Learning
by Trilok Padhi, Ugur Kursuncu, Yaman Kumar, Valerie L. Shalin, Lane Peterson Fronczek
First submitted to arXiv on: 6 Feb 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: Computation and Language (cs.CL); Computer Vision and Pattern Recognition (cs.CV); Computers and Society (cs.CY); Human-Computer Interaction (cs.HC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | In this paper, researchers tackle the challenge of modeling multimodal data that combines different types of content, such as text and images. They propose a novel approach that incorporates external knowledge from knowledge graphs to enhance the representation of multimodal data using compact Visual Language Models (VLMs). The goal is to predict the success of multimodal crowdfunding campaigns by capturing the true holistic meaning of the multimodal content. The study shows that incorporating external knowledge bridges the semantic gap between text and image modalities, leading to improved predictive performance for campaign success.
Low | GrooveSquid.com (original content) | This paper helps us understand how online content can be more effective in attracting users’ attention and engagement. Researchers found a way to use extra information from knowledge graphs to help computers better understand multimodal data, like videos or images with captions. This allows them to predict which crowdfunding campaigns will be successful. The study shows that when we add context to our understanding of online content, it can make a big difference in how well we can engage users and get them to support our projects.
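To make the idea concrete, here is a minimal sketch of knowledge-infused multimodal fusion for success prediction. This is not the authors' actual model: the embedding dimensions, the simple concatenation step, and the linear probe are all illustrative assumptions standing in for the paper's VLM encoders and knowledge-graph integration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-campaign embeddings (dimensions are illustrative):
text_emb = rng.normal(size=64)   # stand-in for a compact VLM's text encoding
image_emb = rng.normal(size=64)  # stand-in for the VLM's image encoding
kg_emb = rng.normal(size=32)     # stand-in for a knowledge-graph entity embedding

# Knowledge-infused fusion, sketched here as simple concatenation of the
# text, image, and external-knowledge views into one joint representation.
fused = np.concatenate([text_emb, image_emb, kg_emb])

# A linear probe with sigmoid output predicting campaign success probability.
w = rng.normal(size=fused.shape[0]) * 0.01  # toy weights; a real model is trained
b = 0.0
p_success = 1.0 / (1.0 + np.exp(-(fused @ w + b)))
print(f"predicted success probability: {p_success:.3f}")
```

In the paper's framing, the knowledge-graph component supplies context that neither modality carries alone, which is what bridges the text-image semantic gap; the fusion and classifier details above are placeholders for that mechanism.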
Keywords
» Artificial intelligence » Attention » Multi modal