Summary of OVR: A Dataset for Open Vocabulary Temporal Repetition Counting in Videos, by Debidatta Dwibedi et al.
OVR: A Dataset for Open Vocabulary Temporal Repetition Counting in Videos
by Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Andrew Zisserman
First submitted to arXiv on: 24 Jul 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | High Difficulty Summary: Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Medium Difficulty Summary: We introduce the OVR (pronounced "over") dataset, a comprehensive collection of annotations for temporal repetitions in videos. The dataset contains over 72K video annotations, each specifying the repetition count, the start and end time of the repeating segment, and a free-form description of what is repeating. Annotations are sourced from Kinetics and Ego4D, covering both exocentric (Exo) and egocentric (Ego) viewing conditions across a wide range of actions and activities. OVR is significantly larger than previous datasets for video repetition counting. We also propose OVRCounter, a transformer-based model that can localize and count repetitions in videos up to 320 frames long. The model is trained and evaluated on the OVR dataset, with performance assessed both with and without text specifying the target class to count, and compared to a prior repetition counting model. |
| Low | GrooveSquid.com (original content) | Low Difficulty Summary: This paper introduces a big database of video annotations that helps computers understand when things repeat in videos. It's called the OVR dataset and has over 72,000 video annotations! These annotations tell us what's repeating, how many times it happens, and even describe what it is. The videos come from two sources, Kinetics and Ego4D, which show different actions and activities. This database is special because it's much bigger than previous ones for counting repetitions in videos. |
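To make the annotation structure concrete, here is a minimal sketch of what a single OVR-style record could look like, covering the three pieces of information the summary mentions (repetition count, start/end time, free-form description). The field names here are hypothetical illustrations, not the dataset's actual schema:

```python
from dataclasses import dataclass

@dataclass
class RepetitionAnnotation:
    # Hypothetical field names -- the actual OVR schema may differ.
    video_id: str       # source clip identifier (from Kinetics or Ego4D)
    count: int          # number of repetitions observed
    start_s: float      # start of the repeating segment, in seconds
    end_s: float        # end of the repeating segment, in seconds
    description: str    # free-form text describing what repeats

# Illustrative record: a clip where someone does ten jumping jacks.
ann = RepetitionAnnotation(
    video_id="clip_0001",
    count=10,
    start_s=2.5,
    end_s=14.0,
    description="person doing jumping jacks",
)
segment_length = ann.end_s - ann.start_s  # 11.5 seconds of repeating activity
```

A model like OVRCounter would consume the video frames (and optionally the text description as the target class) and be trained to predict the `count` and the temporal extent of the segment.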
Keywords
* Artificial intelligence
* Transformer