


FILS: Self-Supervised Video Feature Prediction In Semantic Language Space

by Mona Ahmadian, Frank Guerin, Andrew Gilbert

First submitted to arxiv on: 5 Jun 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper proposes a self-supervised approach for learning semantic video representations by leveraging text-related content during pretraining. The authors present FILS, a novel model that can capture structured information by predicting masked feature semantics in language space using a patch-wise video-text contrastive strategy. This approach demonstrates remarkable transferability on downstream action recognition tasks, achieving state-of-the-art results on challenging egocentric datasets like Epic-Kitchens and Something-SomethingV2.
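The summary above describes FILS only at a high level: masked video patch features are predicted and aligned with language embeddings via a patch-wise video-text contrastive objective. The paper's exact loss is not given here, so as a rough illustration only, a generic InfoNCE-style video-text contrastive loss over pooled patch features might look like the sketch below. All function and variable names are hypothetical, and mean-pooling over patches is a simplifying assumption, not the authors' method.

```python
import numpy as np

def patchwise_contrastive_loss(patch_feats, text_feats, temperature=0.07):
    """Illustrative InfoNCE-style loss (not FILS's exact objective).

    patch_feats: (B, P, D) per-patch video features for a batch of B clips.
    text_feats:  (B, D) caption embeddings; pair i matches clip i.
    Each clip's pooled embedding should score highest against its own
    caption and lower against the other captions in the batch.
    """
    video_emb = patch_feats.mean(axis=1)  # pool patches -> (B, D); simplifying assumption
    video_emb = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    text_emb = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)

    logits = (video_emb @ text_emb.T) / temperature  # (B, B) cosine similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Matched video-text pairs sit on the diagonal; maximize their probability.
    return -np.mean(np.diag(log_probs))
```

With this toy loss, embeddings that already agree with their captions score a lower loss than random pairings, which is the basic pressure that pulls visual features into the language space.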
Low Difficulty Summary (original content by GrooveSquid.com)
This paper shows how to make computers better at understanding videos by connecting them with language. The idea is to train the computer to predict what is happening in hidden parts of a video using the text that describes it. This pushes the computer to learn the meaning behind the video, which makes it good at recognizing actions and events. The new approach, called FILS, works really well and beats other methods on tricky video recognition tasks.

Keywords

» Artificial intelligence  » Pretraining  » Self supervised  » Semantics  » Transferability