Summary of FunnyNet-W: Multimodal Learning of Funny Moments in Videos in the Wild, by Zhi-Song Liu et al.
FunnyNet-W: Multimodal Learning of Funny Moments in Videos in the Wild
by Zhi-Song Liu, Robin Courant, Vicky Kalogeiton
First submitted to arXiv on: 8 Jan 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Multimedia (cs.MM); Sound (cs.SD); Audio and Speech Processing (eess.AS)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | A machine learning model called FunnyNet-W is proposed for automatically detecting funny moments in comedy videos. This task is challenging because funny moments can depend on many cues, such as body language, dialogue, and culture. The model relies on cross-attention and self-attention mechanisms to fuse visual, audio, and text data. Unlike previous methods that rely on ground-truth subtitles, FunnyNet-W uses only modalities that occur naturally in videos: video frames, audio, and automatically extracted text. To acquire training labels, the authors propose an unsupervised approach that spots and labels funny audio moments. The model is evaluated on five datasets: TBBT, MHD, MUStARD, Friends, and UR-Funny. The results show that FunnyNet-W successfully exploits multimodal cues to identify funny moments, setting a new state of the art in funny moment detection. |
| Low | GrooveSquid.com (original content) | A team of researchers has created a special computer program called FunnyNet-W that can find the funniest parts in comedy videos. This is important because laughter is contagious and makes us feel happy! The program uses three main types of data: what we see (video frames), what we hear (audio), and what we read (automatic text). It's like using our eyes, ears, and brain to understand humor. To teach the program, they came up with a clever way to spot funny moments without relying on human labels. They tested FunnyNet-W on five different comedy shows and found that it does an excellent job of finding the funniest parts! |
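The medium-difficulty summary mentions that the model fuses modalities with cross-attention. As an illustration only, here is a minimal, dependency-free sketch of one scaled dot-product cross-attention step, where features from one modality attend over another. All dimensions and feature values are toy assumptions, not taken from the paper:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    """Each query vector (one modality) attends over key/value
    vectors from another modality and returns a fused vector."""
    d = len(keys[0])  # key dimension, for the 1/sqrt(d) scaling
    out = []
    for q in queries:
        # Scaled dot-product similarity between the query and every key.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted sum of the value vectors.
        fused = [sum(w * v[j] for w, v in zip(weights, values))
                 for j in range(len(values[0]))]
        out.append(fused)
    return out

# Toy example: a "visual" feature attending to two "audio" features.
visual = [[1.0, 0.0]]
audio_keys = [[1.0, 0.0], [0.0, 1.0]]
audio_values = [[0.9, 0.1], [0.2, 0.8]]
fused = cross_attention(visual, audio_keys, audio_values)
# The fused vector leans toward the first audio value, since the
# query aligns with the first key.
```

In FunnyNet-W itself these would be learned projections over frame, audio, and text embeddings; the sketch only shows the attention arithmetic the summary refers to.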
Keywords
* Artificial intelligence * Cross attention * Machine learning * Self attention * Unsupervised